10 Important Features to Watch out for in Salesforce Admin Spring ’23 Release

The Salesforce Spring ‘23 release is around the corner, and it seems to be packed with plenty of great news. A few features that were scheduled for a later release appear to have been pulled forward, and there is other amazing stuff visible in the beta version. Overall, the bag of goodies seems to be loaded and the anticipation is high!

As a tradition, Salesforce makes major releases three times a year. These releases are awaited with a lot of expectation, as they offer new features and technology updates that admins can leverage. I feel that the Spring ‘23 release will definitely help drive productivity and boost security. While building a great user experience (UX) seems to have been factored in, I also see lots of opportunities for building fantastic apps for both internal and external stakeholders.

This article focuses on what I consider the 10 most important features of the Salesforce Spring ’23 release for admins.

1. Migrate Process Builder to Flow

At Dreamforce ’21, Salesforce announced the retirement of Workflow Rules and Process Builder and scheduled the release of tools to migrate them to Flow. Until now, only workflow rule migration has been available.

With the Spring ’23 release, Salesforce is shipping an updated Migrate to Flow tool that supports the migration of Process Builder. You can now use the tool to convert Process Builder processes to flows.

The updated Migrate to Flow tool can ease your transition to Flow Builder: in addition to workflow rules, it now converts Process Builder processes into flows. Flows can do everything that processes can do and more.

From Setup, in the Quick Find box, enter Migrate to Flow, and then select Migrate to Flow. On the Migrate to Flow page, select the process that you want to convert into a flow and click Migrate to Flow. Then select the criteria that you want to migrate to the flow. After the process is migrated, test the new flow in Flow Builder, and if everything works as expected, activate the flow and deactivate the process you converted.


2. Build Custom Forecast Pages with the Lightning App Builder

Forecasting in Sales Cloud has seen a good number of updates over the last few releases. In Spring ‘23 you can design and build custom forecast pages using the Lightning App Builder.

Thanks to the ease of building flexipages that the Lightning App Builder gives you, you can build pages using standard and custom components. Your page designs can evolve as fast as your sales processes, and you can create and assign different layouts for different users.

3. Collaborate on Complex Deals with Opportunity Product Splits

In complex business transactions or negotiations, there is generally no single person responsible for closing the deal; an entire team is involved. Splitting the opportunity lets you track credit across multiple team members. Earlier, such splits were possible only at the Opportunity level. With the Spring ’23 release, splits at the Product level are also available.

4. Importing Contacts and Leads with a Guided Experience

With the new guided experience, when users choose to import contacts or leads, they are presented with multiple options to import data, depending on their assigned permissions.


The new wizard provides a simple interface that walks users through the steps to import a CSV file.

5. Dynamic Forms for Leads AND Cases

Salesforce continues to extend Dynamic Forms. With Dynamic Forms, case and lead record pages can now be configured to make them more robust. Earlier, this capability was available only for account, contact and opportunity record pages.

6. View All for Dynamic Related Lists

With the Spring ’23 release, Dynamic Related Lists include a “View All” link that enables users to see the complete list of related records.

7. Dynamic Actions for Standard Objects

Dynamic Actions are now available for all standard objects. Earlier, they were available only for Account, Case, Contact, Lead and Opportunity.

Dynamic Actions enable you to create intuitive, responsive, and uncluttered pages that display only the actions your users need to see, based on the criteria you specify.

Instead of scanning an endless list of actions, users are presented with a simple set of choices that is relevant to their roles and profiles, or shown only when a record meets certain criteria.

8. Track Field History for Activities

You can now track up to six fields each for Task and Event when field history tracking for activities is turned on.


9. Picklist Updates

Picklist fields get a lot of new features, such as:

  • Clean Up Inactive Picklist Values
  • Bulk Manage Picklist Values
  • Limit the Number of Inactive Picklist Values (Release Update)
  • Capture Inclusive Data with Gender Identity and Pronouns Fields: two new optional standard picklist fields, Gender Identity and Pronouns, are now available on Leads, Contacts, and Person Accounts.

10. Reports and Dashboards

Reports and dashboards have received many exciting updates.

  • Creating Personalized Report Filters
    You can now create dynamic report filters based on the user’s profile so that users view records specific to them.
  • Subscribe to More Reports and Dashboards
    In Unlimited Edition orgs, users can now subscribe to up to 15 reports and 15 dashboards. Earlier, the limit was 7.
  • Stay Informed on Dashboard and Report Subscriptions
    You can now create a custom report type to see which reports, dashboards, or other analytics assets users have subscribed to.
  • Stay Organized by Adding Reports and Dashboards to Collections
    Now you can use collections to organize the reports and dashboards even if they exist in multiple folders. You can also pin important collections to your home page, hide irrelevant collections, and share collections with others.
  • Focus Your View with More Dashboard Filters
    You can refine and target the dashboard data with additional filters on Lightning dashboards. You no longer need to maintain separate versions of the same dashboard for different business units and regions because of the old three-filter limit. This feature is in beta.

Conclusion

The Salesforce Spring ’23 release, I feel, will certainly not disappoint administrators, as a lot of the most-awaited features seem to have made it in. A few features seen in the beta came as a pleasant surprise. I would definitely encourage you to read the release notes so that you can identify the features that are important to you.

We, at InfoVision, have a dedicated Salesforce Center of Excellence that focuses on innovation and through which we develop new Salesforce competencies. We leverage a range of tools, processes and accelerators to build industry-specific use cases that conform to global standards. We therefore follow each and every release that Salesforce makes with great interest and curiosity. The releases create opportunities for us to innovate and find differentiated ways to solve the unmet needs of our customers.

I am happy to have more in-depth discussions on any aspect of Salesforce with those of you who are interested.

Building customer loyalty in retail

Loyal customers in retail make more repeat purchases, shop more and refer more. Organizations in the US, on average, spend 4 to 6 times more on acquiring new customers than on keeping existing ones. From a business perspective, especially in today’s competitive landscape where customers have multiple options to choose from, customer retention is as important as customer acquisition. This is why customer loyalty needs to be carefully studied and planned in order to maximize the value of a loyal customer. Besides, many retail industry studies conclude that product quality alone does not suffice for modern buyers. Customer service and personalized recommendations are a definite plus that can swing the pendulum in a brand’s favor.

Loyalty is Valuable Even When Partial

A returning customer who repeatedly prefers to buy from one brand over another is considered a loyal customer. Retail loyalty is different from brand loyalty in other industries in terms of frequency of purchase, range of products and fierceness of competition. For these reasons, 100% customer loyalty is quite unlikely in retail, but this does not diminish the value of a loyal customer for a retailer. Several factors play a role in influencing a customer to favor a particular brand: convenience while shopping, satisfaction with the range of products, attractive offers and familiarity with the brand, among others.

Customer Loyalty Programs

Building loyalty programs is an effective way to nudge customers to prefer your brand. 50% of US consumers use a loyalty card or app for the purchase of fuel. Around 71% of retailers offer some kind of loyalty program.  For loyalty programs to really work without increasing customer friction, they need to be contextual and timely. Else, they run the risk of backfiring. Advanced technologies and innovative solutions for loyalty programs help retailers with deeper customer insights in real-time and thus make hyper-personalization possible.

According to a Gartner insight, “customer loyalty can be increased by performing value enhancement activities that leave customers feeling like they can use the product better and are more confident in their purchase decision.”

Challenges with the Traditional Approach to Loyalty Programs

While loyalty programs are not a new concept, conventional methods haven’t always made a significant impact on retailers or their customers. Worse still, some of them backfire and are seen as a nuisance by customers. Here are some challenges that modern, technology-backed loyalty programs need to get right.

  • Integration with existing systems
    As retailers move towards an omnichannel presence and digitally transform every aspect of their operations, integrating loyalty programs with these systems has not been straightforward. Without this integration, it is very difficult to harness the full potential of loyal customers. Retailers therefore need to carefully select their loyalty platform, whether an off-the-shelf solution from a provider or one developed in-house.
  • Analytics
    Insights on customer behavior and their response to promotional offers are needed to understand how well the loyalty program is being received. While retailers may have this data, it is usually scattered and not systematic enough to run analytics. Data insights are also crucial to ascertain if the value created for the retailer is greater than the value delivered to the customer. Any good loyalty program solution should have this capability built in.
  • Impersonal Offers
    Generic loyalty programs are seldom relevant. Personalized rewards are much more meaningful to customers. There are several brands that fail to leverage their customer data effectively to bring the desired personalization in offers and promotions. Offering a promotion on coffee to a customer who usually purchases tea is what needs to be avoided.
  • Transaction-only Focus
    Most traditional approaches have a narrow view of customer loyalty that is linked only to purchases. However, every time a customer writes a review or refers your brand to others, they are displaying loyalty, and these actions can be treated as triggers for rewards.
  • Not Simple Enough
    With everything else that is going on, the last thing customers want is a hard-to-understand and difficult-to-track loyalty program. Similarly, if the process to redeem points is not simple enough, customers may not bother and might actually be put off, which is quite the opposite of the primary intent of any loyalty program. When technology is being used to make the buying experience as simple as possible, why shouldn’t the same principle apply to loyalty programs?
  • Short-term Redundant Offers
    Some users may like to collect rewards in the form of redeemable points, while some may prefer membership coupons or event passes. Repetitive offers can become redundant and irrelevant. Similarly, short-term offers do not build loyalty in the long term. A good mix of offers that cover a wider range and period is more appreciated by customers.

Digital Wallets and Shared Loyalty Programs

Retailers with mature loyalty programs are now looking to offer shared loyalty programs in collaboration with other brands. This proves to be more cost-efficient for the brands and more beneficial to the users. Also known as coalition loyalty, this is done with the help of digital wallets and extensive partnerships with unrelated brands. For example, retailers may partner with fuel providers to extend the scope of their rewards and further strengthen customer loyalty. Brands with different purchase cycles also stand to gain from each other’s customer loyalty. Digital wallets or mobile wallets have given rise to a new kind of loyalty economy, where consumers can track their reward points from various brands in one location and actually use them at POS counters. With everything now being on mobile, users no longer need to keep another physical loyalty card handy.

Gamification is another new way to build engaging loyalty programs. It helps in engaging the customers, creating a sense of community or accomplishment and generating excitement for the brand.  Such programs need to be highly creative and leverage the latest technologies.

The bottom line of any loyalty program is to generate profits and not become a cost center. Knowing your customers’ preferences and having that data handy is the only way to create personalized and targeted promotions. Identifying the right channel (POS, POPs, SMS, in-app), the right offer and the right time may look simple but has clearly been a challenge for retailers. Partnering with an experienced loyalty solution provider like InfoVision can help overcome most of these challenges. The team of technology and retail experts at InfoVision has successfully implemented a combination of mobile fuel payments, digital wallet, mobile checkout and customer rewards for a leading multinational retailer.

Want to talk to our expert?  Please write to us at info@infovision.com

Vector-based Search: An Efficient Technique for Unstructured Duplicate Data Detection

Organizations today are driven by a competitive landscape to make insights-led decisions at speed and scale, and data is at the core of this. Capturing, storing and analyzing large volumes of data properly has become a business necessity. Analyst firm IDC predicts that global data creation and replication will reach 181 zettabytes in 2025. However, almost 80% of that data will be unstructured, and a much smaller share of it will be analyzed and stored.

A single user or organization may collect large amounts of data in multiple formats such as images, documents, audio files, and so on, which consume significant storage space. Most storage applications use a predefined folder structure and give a unique file name to all data that is stored. This unique file-naming scheme allows the same file to exist under different names, which makes it rather difficult to identify duplicate data without checking its content.

This blog focuses on the challenges associated with data duplication and on how to detect duplicates in unstructured folder directories.

The complications of unstructured data

Unstructured data is defined as data that lacks a predefined data model or that cannot be stored in relational databases. According to a report, 80% to 90% of the world’s data is unstructured, the majority of which has been created in the last couple of years, and unstructured data is growing at a rate of 55%-65% every year. Unstructured data may contain large amounts of duplicate data, limiting enterprises’ ability to analyze their data.

Here are a few issues with unstructured data (duplicate data in particular) and its impact on any system and its efficiency:

  • Increase in storage requirements: The more duplicate data there is, the higher the storage requirements. This substantially increases the operating costs for applications.
  • Large number of data files: This significantly increases the response time for every type of search function.
  • Delays in migration: More time is required to migrate data from one storage facility to another.
  • Difficulty in eliminating duplicates: Removing duplicate files becomes harder as the system scales.

Redundant data creates disarray in the system. For that reason, it becomes imperative for organizations to identify and eliminate duplicate files. A clean database free of duplicate data avoids unnecessary computation requirements and improves efficiency.

Challenges in duplicate record detection

Detecting duplicate files with search functions that use file characteristics like name, size and type may seem to be the easiest method. However, it might not prove to be the most efficient one, especially when the data is at a large scale. Here’s why:

  • Searching with file names: Most applications use unique file names to store media files. This makes the search difficult because the same file can exist under different names; identifying duplicate data is not possible unless the content is examined.
  • Search based on content: As searching with file names isn’t suitable for these applications, a search based on content appears to be the next option. However, if we are dealing with a large document or a PDF with multiple pages, this is not a feasible solution either: it will not only have high latency but will also be computationally expensive.
  • Search based on types and formats: Media files can be of different types like images, video, audio and so on. Each type of media file can be stored in multiple formats. For instance, an audio file can be saved as .wav, .mp3, AAC or others. The file structure and encoding for each format will be different, hence making the detection of duplicate files difficult.

The proposed solution

A suitable solution to detect duplicate files must address the complications associated with dealing with large volumes of data, multiple media formats and low latency. If each file were converted into a multi-dimensional vector and fed as input to a nearest neighbors algorithm, one would get the top 5-10 possible duplicate copies of the file. Once converted into vectors, duplicate data can be easily identified because the distance between the vectors of duplicate files will be almost negligible.

Here’s how different types of files can be converted to multi-dimensional vectors.

  1. Image files: Images are multi-dimensional arrays made up of pixels, and each pixel has three values – red, green and blue. When passed through a pre-trained convolution neural network, an image or a video frame gets converted into a vector. A convolution neural network is a deep learning architecture specifically designed to work with image inputs. Many standard architectures like VGG16, ResNet, MobileNet, AlexNet and others have proven to be very efficient at prediction tasks. These architectures are trained on large standard datasets like ImageNet with classification layers at the top.

     As a simple reference, a basic convolution neural network works as follows: the required images are fed into multiple convolution layers as inputs. Convolution layers are trained to identify underlying patterns in image inputs, and each convolution layer has its own set of filters that it multiplies against the pixels of the input image. The pooling layer averages the pixels and reduces the image size as it passes on to the next step in the network. The flatten layer collects the input from the pooling layers and gives out the vector form of the image. A sketch of this image-to-vector conversion appears after this list.
     
  2. Text files: To convert text files into vectors, the words that make up that particular file are used. Words are nothing but combinations of ASCII codes of characters; however, there is no ready-made numeric representation for a complete word. In such cases, pre-trained word vectors such as Word2Vec or GloVe vectors can be used. Pre-trained word vectors are obtained after training a deep-learning model such as the skip-gram model on large text data. More details on the skip-gram model are available in the TensorFlow documentation. The output vector dimension will change with respect to the chosen pre-trained word representation model.

    To convert a text document with multiple words, where the number of words is not fixed, an Average Word2Vec representation can be used for the complete document. The Average Word2Vec vector is the mean of the individual word vectors:

    doc_vector = (v(w1) + v(w2) + … + v(wN)) / N, where v(wi) is the pre-trained vector for the i-th of the N words in the document.

    This solution can be made more robust by adding a 36-dimensional (26 letters + 10 digits) vector as an extension to the final representation of the text file. This helps in cases where two text files have the same characters but in different sequences. A sketch of this text-to-vector conversion appears after this list.
  3. PDF files: PDF files usually contain text, images or a mix of both. Therefore, to make the solution more inclusive, vector conversion for both text and images is programmed in. The approaches discussed earlier to convert text and images into vectors are combined here.

    First, to convert the text into a vector, it needs to be extracted from the PDF file and then passed through a similar pipeline as discussed before. Similarly, to convert images to vectors, each page in a PDF is considered as an image and is passed through a pre-trained convolution neural network as discussed before. A PDF file can have multiple pages and to include this aspect, the average of all page vectors is taken to get the final representation.
     
  4. Audio files: Audio files stored in .wav or .mp3 formats are sampled values of audio levels. Audio signals are analogue, and to store them digitally they undergo a process of sampling. Sampling is a process where an analogue-to-digital converter captures sound waves from audio files at regular intervals of time (known as samples) and stores them. The sampling rate may vary according to the application used. Therefore, while converting audio files to vectors, a fixed resampling rate is used to standardize the sampling.

    Another difficulty while converting audio files into vectors is that the lengths of the audio files may vary. To solve this, a fixed-length vector can be produced through padding (adding zeros at the end or start) or trimming (cutting the vector to a fixed length), depending on the audio length. A small padding/trimming helper is sketched after this list.
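For the image case (item 1 above), here is a minimal sketch of the idea, assuming TensorFlow/Keras is available and using MobileNetV2 without its classification head as the pre-trained network. The file paths and the choice of backbone are illustrative assumptions, not part of the original post.

```python
import numpy as np
import tensorflow as tf

# Pre-trained CNN with the classification layers removed; global average pooling
# turns the final feature maps into a single fixed-length embedding vector.
backbone = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg"
)

def image_to_vector(path: str) -> np.ndarray:
    """Convert one image file into a fixed-length embedding vector."""
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)
    x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
    return backbone.predict(x[np.newaxis, ...], verbose=0)[0]

# Example (hypothetical paths): vectors for two copies of the same photo
# should come out nearly identical, so their distance is close to zero.
# v1 = image_to_vector("photos/IMG_0001.jpg")
# v2 = image_to_vector("photos/holiday_copy.jpg")
# print(np.linalg.norm(v1 - v2))
```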
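For the text case (item 2 above), here is a minimal sketch of the Average Word2Vec idea, assuming the pre-trained word vectors have already been loaded into a plain word-to-NumPy-array dictionary (for example from GloVe). The 36-dimensional extension is implemented here as a simple per-character count, which is my assumed interpretation since the post does not spell out its contents.

```python
import re
import numpy as np

def text_to_vector(text: str, word_vectors: dict, dim: int = 300) -> np.ndarray:
    """Average Word2Vec for a document, plus a 36-dim (26 letters + 10 digits) extension."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    known = [word_vectors[w] for w in words if w in word_vectors]
    avg = np.mean(known, axis=0) if known else np.zeros(dim)

    # 36-dimensional extension: counts per letter and digit (assumed interpretation)
    counts = np.zeros(36)
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1
        elif ch.isdigit():
            counts[26 + int(ch)] += 1
    return np.concatenate([avg, counts])
```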
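For the audio case (item 4 above), the fixed-length part can be a small padding/trimming helper such as the sketch below; loading and resampling to a standard rate could be handled by a library such as librosa (shown only as a comment, as an assumption rather than a requirement).

```python
import numpy as np
# Optional, assuming librosa is installed:
# samples, _ = librosa.load("recording.mp3", sr=16000)  # resample to a standard rate

def fix_length(samples: np.ndarray, target_len: int) -> np.ndarray:
    """Pad with zeros or trim so every audio file yields a vector of the same length."""
    if samples.shape[0] >= target_len:
        return samples[:target_len]
    return np.pad(samples, (0, target_len - samples.shape[0]))
```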

Finding duplicates with vector representations

With vector representations for all types of files, it becomes easier to find duplicate data based on the distances between their vectors. As previously stated, detection by comparing every pair of vectors may not be an efficient method as it can increase latency. Therefore, a more efficient, lower-latency method is to use the nearest neighbors algorithm.

This algorithm takes vectors as inputs and computes the Euclidean distance or cosine distance between all the possible pairs of vectors. The files whose vectors lie closest to each other are likely duplicates.

A brute-force Euclidean-distance search over all pairs takes O(n^2) time, but the optimized scikit-learn implementation with KD-tree integration reduces the computational time to roughly O(n(k + log n)). Note: k is the dimension of the input vector and n is the number of files.

Please note that, since different processing pipelines are used to convert images, text, PDFs, and audio files into vectors, the resulting vectors may not be on the same scale. Since the nearest neighbours algorithm is distance-based, we may not get correct results if the vectors are on different scales. For instance, one vector’s values might vary from 0 to 1 while another’s vary from 100 to 200; in that case, irrespective of the actual similarity, the second vector will dominate the distance calculation.

The nearest neighbour algorithm also tells us how similar the files are: the smaller the distance between vectors, the more similar the files. To have a uniform distance measure, each file vector has to be scaled to a standard range, which can be done with a pre-processing technique such as StandardScaler from scikit-learn. After the pre-processing, the nearest neighbour algorithm can be applied to get the nearest vectors for each file. Since Euclidean distances are returned along with the nearest-neighbour vectors, a distance threshold can be applied to filter out less probable duplicates.
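As a minimal sketch of this step, assuming the file vectors have already been computed and stacked into a single NumPy array (the function name, the neighbour count and the threshold value below are illustrative placeholders, not fixed recommendations), scaling with StandardScaler and querying a KD-tree-backed NearestNeighbors model might look like this:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def probable_duplicates(vectors: np.ndarray, distance_threshold: float = 1.0, k: int = 6):
    """For each file, return its nearest candidates whose scaled distance is under a threshold.

    `vectors` is an (n_files, n_dims) array of file vectors built by the
    conversion steps above.
    """
    # Bring every dimension onto a comparable scale before measuring distances
    scaled = StandardScaler().fit_transform(vectors)

    # KD-tree-backed nearest-neighbour search (Euclidean distance by default)
    nn = NearestNeighbors(n_neighbors=min(k, len(scaled)), algorithm="kd_tree").fit(scaled)
    distances, indices = nn.kneighbors(scaled)

    candidates = {}
    for i, (dist_row, idx_row) in enumerate(zip(distances, indices)):
        # Skip the first hit (the file itself) and keep only sufficiently close matches
        close = [(int(j), float(d)) for j, d in zip(idx_row[1:], dist_row[1:])
                 if d <= distance_threshold]
        if close:
            candidates[i] = close
    return candidates
```

Lowering the threshold makes the filter stricter; in practice it would be tuned against a handful of known duplicate pairs.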

Conclusion

Data duplication in any system will impact its performance and demand unnecessary infrastructure requirements. Duplicate record detection based on file characteristics is not a recommended method as it might require an examination of the content for accurate results. Vector-based search is a more efficient technique for duplicate record detection. Successful implementation of this methodology can help identify the most and least probable duplicate files in unstructured data storage systems.