Heap Size: Elephant in the room

Heap size is a common yet neglected problem in Salesforce Apex development. Heap size is the amount of memory used by the variables and object instances created during an Apex transaction; to store these objects and variables, Salesforce allocates memory from the heap. The available heap depends on the type of transaction: Salesforce provides a heap size limit of 6 MB for synchronous transactions and 12 MB for asynchronous transactions. Whenever too much data is held in memory during processing, the transaction fails with the error “Apex heap size too large”. If you are a developer, you have probably encountered this error at least once.
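
As a quick illustration, the Limits class in Apex can report how much of that allowance the current transaction has already consumed, which is handy when chasing this error; a minimal sketch:

    // Sketch: monitoring heap consumption at runtime with the Limits class
    System.debug('Heap used so far: ' + Limits.getHeapSize() + ' bytes');
    System.debug('Heap ceiling for this transaction: ' + Limits.getLimitHeapSize() + ' bytes');

    // Example guard before building another large in-memory collection
    if (Limits.getHeapSize() > Limits.getLimitHeapSize() * 0.8) {
        System.debug(LoggingLevel.WARN, 'Approaching the heap limit; process the data in smaller chunks.');
    }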

Here are a few tips to ensure that your heap usage stays under the limit and your code runs smoothly.
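
A tip commonly given in this situation is to avoid materializing an entire query result on the heap; the sketch below (object and field names are purely illustrative) contrasts that with a SOQL for loop, which streams records in chunks:

    // Heap-heavy: the whole result set is held in a single list on the heap
    List<Contact> allContacts = [SELECT Id, Email FROM Contact];

    // Heap-friendlier: a SOQL for loop retrieves the records in batches of up to 200,
    // so only one batch needs to live on the heap at any point in time
    Integer contactsWithEmail = 0;
    for (List<Contact> batch : [SELECT Id, Email FROM Contact]) {
        for (Contact c : batch) {
            if (c.Email != null) {
                contactsWithEmail++;
            }
        }
    }
    System.debug('Contacts with an email address: ' + contactsWithEmail);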


Continue reading

Posted in apex development, Salesforce, salesforce development, salesforce integration, Salesforce.com.

Evaluate Lightning Web Components In Online Playground

Salesforce has introduced an online editor, the Playground, for trying out Lightning Web Components. Lightning Web Components is Salesforce's new UI development approach, an evolution of the current Aura framework, and Salesforce allows both Lightning Web Components and Aura to exist side by side.

The Playground can be used not just to experiment but also to evaluate design considerations. By following the instructions here, new Lightning Web Components can also be created in the interactive Playground code editor.

Dual Listbox is a control that shows a set of values on the left side and lets the user choose values and move them to the right side.


Can the dual listbox handle thousands of values on the left side? The Playground makes it quite easy to change the number of values loaded.

Continue reading

Posted in Salesforce, Salesforce Lightning, Salesforce.com, UI.

Recurrent Neural Network with Long Short-Term Memory

What is a Neuron?

In biological terms, a neuron is the unit of the nervous system responsible for carrying messages through the human brain in the form of electrical impulses; neurons are thus the basis of human intelligence. Today, the same idea is used in artificial intelligence as well. A Recurrent Neural Network (RNN) is a class of artificial neural network in which the connections between neurons form a directed graph, or in simpler words, the hidden layers have self-loops. This helps an RNN use the previous state of its hidden neurons to learn the current state: along with the current input, it utilizes the information it has learnt previously. Among neural networks, RNNs are the only ones with an internal memory, and a plain RNN has a short-term memory. It is this internal memory that allows RNNs to remember things.


Continue reading

Posted in Application Architecture, Salesforce AI, salesforce development, Salesforce Einstein, salesforce for healthcare, Salesforce Machine Learning.

Data Breach

A data breach, or data leak, is a security event in which protected data is accessed by or disclosed to unauthorized viewers. A data breach is different from data loss, which is when data can no longer be accessed because of hardware failure, deletion or another cause. Protected data can include information about individual customers or employees, such as personally identifiable information (PII), personal health information, payment card information and Social Security numbers. It can also include corporate information or intellectual property (IP), such as trade secrets, details about manufacturing processes, supplier and customer data, information about mergers and acquisitions, or data about lawsuits or other litigation.

Data breaches are not always intentional. Users can accidentally send protected data to the wrong email address or upload it to the wrong share; in fact, mistakes account for 17% of breaches, according to Verizon’s well-known 2018 Data Breach Investigations Report. But the report found that most breaches are deliberate and financially motivated. While different methods are used to gain access to sensitive data, 28% of breaches involve insiders, according to the Verizon report.

Continue reading

Posted in Application Security, Data Security, Ethical Hacking.

Developing Visualforce Apps using AngularJS

Have you ever wondered how cool it would be to have the flexibility and features the AngularJS framework provides inside a Visualforce page on the Salesforce platform? Well, wait no more; keep reading to get a solution for this. This blog will introduce you to AngularJS and how to develop Visualforce apps using Angular. Every web application requires client-side JavaScript, HTML, and CSS, and without a server framework like Rails, Node or PHP, the entire application can be built using just these front-end tools and languages.

What is ANGULARJS?

AngularJS is a powerful JavaScript framework for building dynamic web applications. It is used to develop mobile and web applications. The good thing about Angular is that it comes with a set of ready-made modules that simplify building single page applications. A single page application (SPA) is a web application that dynamically renders data on the current page without reloading the whole page: all the code (JS, HTML, and CSS) is retrieved with a single page load, and navigation between pages happens without a full reload.
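
One common pattern for wiring an AngularJS front end to Salesforce data inside a Visualforce page is JavaScript Remoting. As a rough sketch (the class, method and filter names here are hypothetical, not from the post), the Apex side could look something like this; the Angular service would call it and bind the returned records to the view without a full page reload:

    // Hypothetical sketch: the Apex side of JavaScript Remoting for an AngularJS SPA
    // hosted in a Visualforce page.
    global with sharing class AngularAccountController {

        // Remote actions are static and callable from JavaScript in the Visualforce page
        @RemoteAction
        global static List<Account> getAccounts(String nameFilter) {
            String pattern = '%' + String.escapeSingleQuotes(nameFilter) + '%';
            return [SELECT Id, Name, Industry FROM Account WHERE Name LIKE :pattern LIMIT 50];
        }
    }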

Continue reading

Posted in Apex Development, force.com app development, salesforce development.

Service Level Agreement (SLA) Management for Cases in Salesforce


In Salesforce, Entitlements are records that ensure a case is acted upon within given time limits; if it is not, a defined action is executed.

An entitlement mainly consists of the below information:

  1. Entitlement Process
  2. Business Hours

An Entitlement Process declares the start time for the process, the exit criteria for the records, the time interval at which the criteria are to be evaluated, and the actions to be taken in scenarios of success, warning and violation.

This process is supported by the concept of Business Hours. Business hours define the window within which the SLA clock for cases runs; the clock stops whenever the current time falls outside that window.

Each entitlement process also contains sub-elements, called Milestones, which help in evaluating the exit criteria defined in the process against each record that enters the entitlement process. Milestones come in different types based on the requirement:

  1. No Recurrence
  2. Sequential
  3. Independent

Milestones evaluate the criteria at a certain interval from the start time of the process. Three types of events are attached to milestones, namely success, warning and violation. These events are fired, respectively, when:

  1. The exit criteria evaluate to true.
  2. The exit criteria evaluate to false and the time interval is about to run out.
  3. The exit criteria evaluate to false and the time interval is already over.

These events can perform actions similar to a workflow, except for sending outbound messages.

Together, these elements make up the SLA process for an organization.
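
Milestones are normally completed through the UI or declarative automation, but they can also be completed in Apex. The following is a rough sketch (the class and method names are ours, and error handling is omitted) that marks open milestones on a set of cases as complete by stamping CompletionDate on the corresponding CaseMilestone records:

    // Rough sketch: programmatically completing an open milestone for a set of cases.
    // CaseMilestone is the standard object that tracks each milestone against a case.
    public with sharing class MilestoneCompletionUtil {

        public static void completeMilestone(Set<Id> caseIds, String milestoneName) {
            List<CaseMilestone> openMilestones = [
                SELECT Id, CompletionDate
                FROM CaseMilestone
                WHERE CaseId IN :caseIds
                  AND MilestoneType.Name = :milestoneName
                  AND CompletionDate = null
            ];
            for (CaseMilestone cm : openMilestones) {
                cm.CompletionDate = System.now(); // stops the SLA clock for this milestone
            }
            update openMilestones;
        }
    }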

Posted in Case Management, Service Cloud, Service Level Agreement.

Integration with Salesforce Lightning External Services

Nowadays there is API integration in almost every org. The main purpose of integrating with another service is to avoid reinventing the wheel. However, the development effort required to integrate with external services is a complex and time-consuming venture. Not only does it reduce speed to market, it also saps developer energy that is better spent on the front end, building the features that really differentiate the app. With Lightning External Services, Salesforce makes this a lot easier and more admin friendly.

With External Services you can connect to any service you want and invoke its methods from a flow, all with the help of an easy-to-use wizard. Declarative tools are used to import API definitions right into Salesforce, and Swagger or Interagent-based API definitions can be used to define an external service. Once the definitions have been imported, you can create Lightning flows that invoke actions generated from the API definition schema. Below is a depiction of how External Services works.

[Diagram: how External Services works]

Here is what is happening in the above image:

Based on the provided API schema specification, a schema definition is created that describes the API. Once this is done, a named credential is created to authenticate to the service’s endpoint using the URL provided by the external service provider; the endpoint is the URL that exposes the web service resources that External Services needs to interact with. Using the named credential and the schema definition, the external service is registered. External Services imports the definitions into your org and generates Apex actions, which are available immediately in Lightning Flow. While building a flow, these Apex actions are added to the flow; at runtime they send a callout to the endpoint, and output is returned based on the schema definition.

Schema Definition for your external service:

The schema specification is basically a contract that defines which types of inputs and outputs can be included in the API calls made from your external service. Endpoint information and authentication parameters for the REST-based API service are also included in the spec. The schema definition itself is human-readable, structured data.

Below is the schema definition of a pet store Swagger API.

Pet Store Schema

This schema declares the various methods available in the API and the inputs and outputs included in this service. For example, the snippet below contains information about a GET method that is used to get the inventories by status.

[Snippet: GET method from the Pet Store schema that returns inventories by status]

We will be using the Pet Store Schema to illustrate the whole external service in this blog.

Registering an External Service:

Registering an external service involves the below two steps:

1. Named Credential: In order to register an external service, you first need to create a named credential, which authenticates to the service’s endpoint using the URL provided by the external service provider. Create a Named Credential as below in your org:

  • For Label, use SwaggerPet.
  • For URL, use https://petstore.swagger.io
  • Leave other fields as they are and click Save.

2. External Service: In your org, go to Setup and search for External Services in the Quick Find box. Create a new external service and provide the below information in the fields:

  • For name, give ExternalSrv1
  • For Named credential, select SwaggerPet named credential created in previous step
  • In the Service Schema Relative URL field paste “/v2/swagger.json”

[Screenshots: registering the external service]

The generated actions are then used in flows. Below is the list of actions available for the Pet Store endpoint; we cover the getOrderById action in this blog.

[Screenshot: Apex actions generated for the Pet Store endpoint]

External Services in flow:

Apex actions generated from External Services can be used in Lightning flows. When users run the flow, External Services sends a callout to the service’s endpoint at runtime. Create a new flow and drag in the Apex Action element, then select “ExternalSrv1_getOrderById__Service”.

This action takes an order id (any integer between 1 and 10) and returns details for that order, such as the order quantity, pet id and ship date. Create variables for the input and output data and configure the outputs to be stored in those variables, as shown below; give id a default value such that 1<=id<=10. Set the input and output values by associating each with a flow variable, and make sure the data type of each input/output matches the input/output specs in the schema definition.

[Screenshots: configuring the getOrderById Apex action and its flow variables]

Connect the Apex Action with the start element.

[Screenshot: the Apex Action connected to the start element]

After completing all these steps, click Debug. This starts the flow in debug mode. On the next screen, select “Show details of what’s executed and render flow in Lightning runtime” and click Run.

[Screenshot: debug output of getOrderById for order Id = 2]

Above is the output of “ExternalSrv1_getOrderById__Service” for order Id = 2. Similarly, other Apex Actions can also be invoked using external services and flow without writing any code.

Summary:

External Services is a great tool that, along with point-and-click automation tools like flows and Process Builder, can be used to integrate an API with Salesforce without writing any code. It reduces development effort and is very admin friendly, making integration simpler and thus cheaper.

Posted in Agile, Salesforce, Salesforce Challenges, salesforce development, salesforce integration, Salesforce Lightning, Service Cloud.

Amazon Simple Queue Service: Overview

With modern cloud architecture, applications are decoupled into smaller, independent building blocks that are easier to develop, deploy and maintain, but applications must also connect these components so that information flows seamlessly among them.


The message queue is a powerful way of connecting application components. It not only simplifies the coding of decoupled applications, it also improves performance, reliability and scalability.

Message queues allow different parts of a system to communicate (send and receive messages) and process operations asynchronously by providing a lightweight buffer that temporarily stores messages, along with endpoints for sending and receiving them. A message is sent to the queue by a component called a producer and is stored in the queue until another component called a consumer retrieves and processes it.

[Diagram: producer, message queue and consumer]

This messaging pattern is often called one-to-one, or point-to-point, communications because many producers and consumers can use the queue, but each message is processed only once, by a single consumer.

What is Amazon SQS?

Amazon Simple Queue Service (SQS) is a secure, durable, highly available and fully managed hosted queue that lets you integrate and decouple distributed software systems and components by transmitting data between them. SQS eliminates the complexity and burden associated with managing and operating message-oriented middleware and empowers developers to focus on differentiating work. Using SQS, messages can be sent, stored and received between software components at any volume, without losing messages or requiring other services to be available.

Use cases: application integration, allocating tasks to multiple worker nodes, decoupling live user requests from intensive background work, decoupling microservices, and batching messages for future processing.

An Amazon SQS message has three basic states:

  1. Sent to a queue by a producer
  2. Received from the queue by a consumer and
  3. Deleted from the queue.

Between states 1 and 2, the message is stored in the queue and available for use; between states 2 and 3, the message is in flight and not available to other consumers.

When a consumer receives and processes a message from a queue, Amazon SQS does not delete the message automatically. Because SQS is part of a distributed system, there is no guarantee that the consumer actually received the message (for example, due to a connectivity issue, or due to an issue in the consumer application), so the consumer must delete the message itself after receiving and processing it. If the message is not deleted within the defined period (called the visibility timeout), it becomes visible again to other consumers.

Benefits of Amazon SQS

  • Administrative overhead – Amazon SQS eliminates administrative overhead by managing the ongoing operations and the underlying infrastructure needed to run and scale message queuing. There is no need to install and configure messaging software or to maintain infrastructure.
  • Security – Amazon SQS can be used to exchange sensitive data between applications using server-side encryption (SSE), which protects the contents of messages in queues using keys managed in AWS Key Management Service (AWS KMS). It also lets you control who can send messages to and receive messages from an Amazon SQS queue.
  • Durability – To ensure the safety of your messages, Amazon SQS stores them on multiple servers, though only for a limited retention period (a maximum of 14 days).
  • Availability – Amazon SQS stores data on different servers and uses redundant infrastructure to provide highly concurrent access to messages.
  • Scalability – Amazon SQS lets you dynamically increase read throughput by scaling the number of tasks reading from a queue, and requires no pre-provisioning or scale-out of AWS resources. It easily scales to handle a large volume of messages without user intervention, buffering requests to transparently handle increases in load.
  • Reliability – It locks your messages during processing so that a message is consumed only once by a single consumer. It also enables multiple producers to send and multiple consumers to receive messages at the same time.
  • Customization – Amazon SQS can be customized in multiple ways, from modifying queue attributes to integrating with other AWS services in order to build scalable and more flexible applications. It is compatible with other Amazon Web Services, including Amazon Relational Database Service, Amazon Elastic Compute Cloud and Amazon Simple Storage Service.

Amazon SQS queues

Amazon SQS offers the following two types of queues:

1. Standard Queue

  • Availability in regions – All regions.
  • Unlimited Throughput – Supports a nearly unlimited number of transactions per second (TPS) per action.
  • At-Least-Once Delivery – A message is guaranteed to be delivered at least once. SQS stores copies of messages on multiple servers for high availability and redundancy. On rare occasions a copy of a message isn’t deleted on one of the servers, for example because that server was unavailable during deletion, which may result in the message being delivered more than once.
  • Use Case – Can be used in any scenario, as long as the application can handle messages that might arrive out of order or be duplicated.
  • Best-Effort Ordering – A standard queue makes a best effort to preserve the order of messages, but occasionally messages might be delivered in an order different from the one in which they were sent.

[Diagram: standard queue message delivery]

2. FIFO Queue

  • Availability in regions– US East (Ohio), Asia Pacific (Tokyo), US East (N. Virginia), US West (N. California), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Canada (Central), US West (Oregon), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), and South America (São Paulo) regions.
  • The queue name must end with the .fifo suffix.
  • High Throughput – FIFO queues support up to 3,000 messages per second with batching and up to 300 messages per second, per action (SendMessage, ReceiveMessage, or DeleteMessage), without batching.
  • Exactly-Once Processing – A message is delivered once and remains available until a consumer processes and deletes it. Duplicates aren’t introduced into the queue.
  • Use Case – Designed for messaging between applications when the order of operations and events is critical, or where duplicates can’t be permitted.
  • First-In-First-Out Delivery – The order in which messages are sent and received is strictly maintained.

[Diagram: FIFO queue message delivery]

Conclusion

Message queues play an important role in distributed systems, enabling communication and coordination among components. As you saw, Amazon provides one such service, Amazon SQS, which is intended to be a highly scalable hosted message queue. Apart from benefits such as scalability and security, it also offers two types of queues well suited to different use cases.

Posted in Amazon SQS, Amazon Web Services (AWS), Application Architecture, Message Queue.

Data Skew in Salesforce

Data is being generated at an explosive pace nowadays, and we are running out of storage solutions to manage it. Research by multiple magazines and portals suggests that 90 percent of the total data in the world was created in the last two years alone. This pace continues to increase day by day, and we are slowly approaching a state where we will not be able to deal with this data. Salesforce is no exception: org instances hold large numbers of records tied to business process needs. When this plethora of data is not managed properly, we slowly approach a state termed Data Skew.

What is Data Skew?

Data skew generally refers to a condition where data is distributed unevenly in a large data set. In Salesforce, data skew occurs when more than 10,000 child object records are related to a single parent object record, or more than 10,000 records of any object are owned by a single Salesforce user. This skew leads to major performance hits and long-running processes, which one should avoid.


Types of Data Skew in Salesforce

Three types of data skew exist in Salesforce which are as follows:

  1. Account Skew
  2. Ownership Skew
  3. Lookup Skew

 

1. Account Skew

This type of Salesforce data skew comes into existence when a large number of child records sit under a single account record. This is a very common scenario, as it is quite tempting to place all your unwanted or unassigned records under an account named ‘Miscellaneous’ or ‘Unassigned’. As easy and correct as it may look, it can cause major issues such as record locking and poor sharing performance. This is mainly because certain standard objects, like Opportunity and Account, have special data relationships that maintain record access under private sharing models. The problems you will face in a state of account skew are:

  • Record Locking: When we perform an update operation on a large number of child records in separate threads, the system locks the child being updated as well as the parent record in order to maintain database integrity for each update. Hence, the parent record might be locked by one thread while another thread is trying to update it.
  • Sharing Problems: When many child records are associated with a single parent record, a simple change to a sharing setting can set off a chain of time-consuming processes. Even a small change, like updating the owner of the parent record, may cause all the sharing rules on the child records to be recalculated, as well as a recalculation of the role hierarchy.

Possible Way for Avoiding Account Skew:

There is only one way to avoid account skew: distribute such child records across multiple accounts rather than accumulating them on a single record. An even distribution of child records across parent accounts protects the organization against performance hits due to account skew.
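
Before redistributing anything, it helps to know which accounts are actually skewed. As an illustrative sketch (Contact is used as the child object here, and the threshold is arbitrary), an aggregate SOQL query can flag parents that are approaching the 10,000-child mark:

    // Illustrative sketch: flag accounts whose child contact counts are nearing the skew threshold.
    // On very large data volumes this aggregate query can itself be expensive,
    // so it is best run from a batch or scheduled context.
    for (AggregateResult ar : [
            SELECT AccountId parentId, COUNT(Id) childCount
            FROM Contact
            WHERE AccountId != null
            GROUP BY AccountId
            HAVING COUNT(Id) > 9000
    ]) {
        Id parentId = (Id) ar.get('parentId');
        Integer childCount = (Integer) ar.get('childCount');
        System.debug('Potentially skewed account ' + parentId + ' has ' + childCount + ' contacts');
    }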

 

2. Ownership Skew
Ownership data skew is another type of data skew that is very common in Salesforce. This issue occurs when more than 10,000 records are owned by a single Salesforce user. Since every record inside Salesforce needs an owner, it is quite common for organizations to create a default owner or queue to which all unassigned or unused records go. It is a preferred solution for many organizations in such a use case, but while this might work for small data sets, it will fail when dealing with large data volumes. It increases the probability of performance issues whenever a change to the sharing settings or some similar operation occurs. For example, if a user owns a large number of records and he/she is moved around in the role hierarchy, the sharing rules for all the records owned by that user will be re-evaluated, resulting in a long-running operation.

Possible Ways for Avoiding Ownership Skew:

  • The best way to avoid this kind of skew is an even distribution of such records among multiple users rather than having a single user own them all.
  • If you are compelled to stick with a single default owner, the performance impact can be reduced by not assigning that user (the record owner) to a role.
  • If the owner must have a role, try to keep the user at the top of the role hierarchy. This avoids the user being moved around the role hierarchy.
  • Make sure the user is not a member of any public group that acts as the source for a sharing rule.

 

3. Lookup Skew
Lookup skew is similar to account skew but can affect a broader range of objects. It happens when a large number of records are associated with a single record through a lookup field. Since lookup fields can exist on standard as well as custom objects, the lookup skew problem can arise on any object in the organization, regardless of whether the lookup exists on a single object or across multiple objects.

Possible Ways for Avoiding Lookup Skew:

  • One method is to distribute the skew across multiple lookup fields. The main cause of the problem is that a large number of records look up to the same record; by providing additional lookup target records to distribute the load, record-lock exceptions can be minimized or even eliminated (see the sketch after this list).
  • Remove unnecessary workflow rules and Process Builder processes on the object to reduce record save time, and make sure synchronous Apex code and triggers are well optimized.
  • If the number of lookup values is low and fixed, use picklist values to represent them rather than lookup fields.
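
As a sketch of the first suggestion above (all object and field names here are hypothetical), a before-insert trigger can spread new records across a small pool of lookup targets in round-robin fashion instead of pointing every record at the same parent:

    // Hypothetical sketch: distribute a heavily used lookup across several "bucket" parents.
    // Ticket__c and Bucket__c are made-up names; substitute your own objects and fields.
    trigger DistributeTicketBuckets on Ticket__c (before insert) {
        // A small pool of parent records that share the load instead of a single parent
        List<Bucket__c> buckets = [SELECT Id FROM Bucket__c WHERE Active__c = true LIMIT 10];
        if (buckets.isEmpty()) {
            return;
        }
        Integer i = 0;
        for (Ticket__c t : Trigger.new) {
            if (t.Bucket__c == null) {
                t.Bucket__c = buckets[Math.mod(i, buckets.size())].Id;
                i++;
            }
        }
    }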

 

Conclusion

Data plays a crucial role in the business architecture of large organizations, and hence these problems are very common. By taking a few steps while designing your architecture, data skew problems can be avoided. Keeping data evenly distributed is still the best bet for getting rid of these skews and their repercussions.

Posted in Uncategorized.

Sales Forecasting


The term ‘Forecasting’ is coined from the words ‘fore’ and ‘casting’, meaning predicting in advance… First thoughts… are we trying to play GOD… predicting in advance??

Oh no… That’s not it…

The best we can do is act logically and have some stats to back that logic, which results in a logical explanation of things.

Business decision making has always been a challenge for executives. No doubt, business decision making needs analytical experience and that gut feeling, but it also has to be backed by some numbers and statistics. Based on the available data, forecasts simply tell us what the future trend is going to be. As a result, executives have something to hold on to while taking significant business-related decisions, because ‘numbers never lie’. But getting those numbers is not as easy a task as it sounds; high-quality forecasts take so much time and effort that the demand for them is not usually met by the analytical team.

Companies follow a certain educated guess based on their forecasted figures, and it rarely hits the target; so, as they say, ‘no forecasting model is perfect’, and that is because of the number of assumptions one must make while building a model. In sales institutions, the concept of demand and supply is driven by sales forecasting. Based on historical trends, and keeping in mind a ton of other factors like seasonality, cyclicity, periodicity etc., forecasting is done to reflect the best possible marketplace conditions. These factors are incorporated to include randomness and uncertainty, giving the forecasting model a life-like scenario. To predict profit, one must know the number of products an organization is going to produce and sell in the next year and the price each of those products would fetch for the company. This prediction depends on the economic scenario of the coming ‘N’ months, which will eventually decide customers’ behavior and buying patterns. None of this can be known accurately beforehand while creating the forecasting plan; these are some of the assumptions one has to make while creating a forecasting model. That is why forecasts are inaccurate, but this shouldn’t stop you from using them. Learning even a small amount from these forecasting models can give you an edge over your competition. Forecasts are not the means in themselves to excel in your business; they are just a benchmark for you to follow and reach a certain level.

As Oscar Wilde correctly said, “A good forecaster is not smarter than everyone else, he merely has his ignorance better organized”. There will always be gaps in these forecasting models, as they are just a means for us to simplify a complicated problem. The best we can do is use these forecasts judiciously to our advantage, backing our gut instinct when taking a business decision.

Every organization uses one or another analytical forecasting tool, keeping in mind its key business factors. They plan for the next ‘N’ months and work to achieve and go beyond that plan. People like what is simple, and that is what our team of business consultants, technologists and data scientists did for you: they worked on this very problem statement and came up with Delphi, a sales forecasting and predictive analytics application for Salesforce CRM.


Delphi intuitively uses collective organizational knowledge to analyze and predict forecasted values of opportunities. Its algorithms predict when an opportunity in your pipeline will convert into an order and forecast the value of your Opportunities, Accounts and Sales Reps.

Delphi works on Machine Learning to produce quality forecasts.

Based on your organization’s historical data, Delphi analyzes the business trend and shows the forecasted output accordingly.

Delphi uses complex Machine Learning algorithms to give you a zero-hassle, holistic view of your Salesforce org in just a CLICK!! Delphi prioritizes your opportunities by generating an opportunity score and recommends Sales Rep assignments for those opportunities, helping you allocate your resources and prioritize your work. It also tells you when opportunities are likely to close and the forecasted amount they might fetch when they do.

 


Try DELPHI for free here!!

In all, Delphi is a forecasting tool that ensures you are not left with just your gut feeling when taking a crucial business decision. You now have a skilled analytical companion with you.

Thanks for reading!!

Posted in Analytics, Forecasting, Monte Carlo Simulation, Prediction, Sales, Salesforce, Salesforce AI, Salesforce Einstein, Salesforce Machine Learning.