Building customer loyalty in retail

Loyal customers in retail make more repeat purchases, shop more often and refer more. US organizations, on average, spend four to six times more to acquire a new customer than to retain an existing one. From a business perspective, especially in today’s competitive landscape where customers have multiple options to choose from, customer retention is as important as customer acquisition. This is why customer loyalty needs to be carefully studied and planned in order to maximize the value of a loyal customer. Moreover, many retail industry studies conclude that product quality alone does not suffice for modern buyers. Customer service and personalized recommendations are definite pluses that can tip the scales in a brand’s favor.

Loyalty is Valuable Even When Partial

A returning customer who repeatedly prefers to buy from one brand over another is considered a loyal customer. Retail loyalty differs from brand loyalty in other sectors in terms of purchase frequency, range of products and fierce competition. For these reasons, 100% customer loyalty is quite unlikely in retail. This does not, however, diminish the value of a loyal customer for a retailer. Several factors influence a customer to favor a particular brand: convenience while shopping, satisfaction with the range of products, attractive offers and familiarity with the brand, among others.

Customer Loyalty Programs

Building loyalty programs is an effective way to nudge customers to prefer your brand. 50% of US consumers use a loyalty card or app for the purchase of fuel. Around 71% of retailers offer some kind of loyalty program.  For loyalty programs to really work without increasing customer friction, they need to be contextual and timely. Else, they run the risk of backfiring. Advanced technologies and innovative solutions for loyalty programs help retailers with deeper customer insights in real-time and thus make hyper-personalization possible.

According to a Gartner insight, “customer loyalty can be increased by performing value enhancement activities that leave customers feeling like they can use the product better and are more confident in their purchase decision.”

Challenges with the Traditional Approach to Loyalty Programs

While loyalty programs are not a new concept, conventional methods haven’t always made a significant impact on retailers or their customers. Worse still, some of them backfire and are seen as a nuisance by customers. Here are some areas that modern, technology-backed loyalty programs need to get right.

  • Integration with existing systems
    As retailers move towards an omnichannel presence and digitally transform every aspect of their operations, integrating loyalty programs with these systems has not been straightforward. Without this integration, it is very difficult to harness the full potential of loyal customers. Retailers therefore need to carefully select their loyalty platform, whether it is an off-the-shelf solution from a provider or one developed in-house.
  • Analytics
    Insights on customer behavior and their response to promotional offers are needed to understand how well the loyalty program is being received. While retailers may have this data, it is usually scattered and not systematic enough to run analytics. Data insights are also crucial to ascertain if the value created for the retailer is greater than the value delivered to the customer. Any good loyalty program solution should have this capability built in.
  • Impersonal Offers
    Generic loyalty programs are seldom relevant. Personalized rewards are much more meaningful to customers. Many brands fail to leverage their customer data effectively to bring the desired personalization to offers and promotions. Offering a promotion on coffee to a customer who usually purchases tea is exactly what needs to be avoided.
  • Transaction-only Focus
    Most traditional approaches have a narrow view of customer loyalty which is only linked to purchases. However, every time a customer writes a review or refers your brand to others, they are displaying loyalty and can be treated as a trigger for rewards.
  • Not Simple Enough
    With everything that goes on, the last thing customers want is a hard-to-understand, difficult-to-track loyalty program. Similarly, if the process to redeem points is not simple enough, customers may not bother and may even be put off, the opposite of what any loyalty program intends. When technology is being used to make the buying experience as simple as possible, why shouldn’t the same principle apply to loyalty programs?
  • Short-term Redundant Offers
    Some users may like to collect rewards in the form of redeemable points, while others may prefer membership coupons or event passes. Repetitive offers quickly become redundant and irrelevant. Similarly, short-term offers do not build long-term loyalty. Customers appreciate a good mix of offers that covers a wider range and a longer period.

Digital Wallets and Shared Loyalty Programs

Retailers with mature loyalty programs are now looking to offer shared loyalty programs in collaboration with other brands. This proves to be more cost-efficient for the brands and more beneficial to the users. Also known as coalition loyalty, this is done with the help of digital wallets and extensive partnerships with unrelated brands. For example, retailers may partner with fuel providers to extend the scope of their rewards and further strengthen customer loyalty. Brands with different purchase cycles also stand to gain from each other’s customer loyalty. Digital wallets or mobile wallets have given rise to a new kind of loyalty economy, where consumers can track their reward points from various brands in one location and actually use them at POS counters. With everything now being on mobile, users no longer need to keep another physical loyalty card handy.

Gamification is another new way to build engaging loyalty programs. It helps in engaging the customers, creating a sense of community or accomplishment and generating excitement for the brand.  Such programs need to be highly creative and leverage the latest technologies.

The bottom line of any loyalty program is to generate profits, not to become a cost center. Knowing your customers’ preferences and having that data handy is the only way to create personalized, targeted promotions. Identifying the right channel (POS, POPs, SMS, in-app), the right offer and the right time may look simple but has clearly been a challenge for retailers. Partnering with an experienced loyalty solution provider like InfoVision can help overcome most of these challenges. The team of technology and retail experts at InfoVision has successfully implemented a combined mobile fuel payment, digital wallet, mobile checkout and customer rewards system for a leading multinational retailer.

Want to talk to our expert?  Please write to us at info@infovision.com

Vector-based Search: An Efficient Technique for Unstructured Duplicate Data Detection

Organizations today are driven by a competitive landscape to make insights-led decisions at speed and scale, and data is at the core of this. Capturing, storing and analyzing large volumes of data properly has become a business necessity. Analyst firm IDC predicts that the global creation and replication of data will reach 181 zettabytes in 2025. However, almost 80% of that data will be unstructured, and only a fraction of it will be analyzed or stored.

A single user or organization may collect large amounts of data in multiple formats such as images, documents and audio files, which consume significant storage space. Most storage applications use a predefined folder structure and assign a unique file name to every item stored. Because each application assigns its own names, the same file can exist under different names, making it rather difficult to identify duplicate data without checking its content.
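For byte-for-byte copies, checking content rather than names is straightforward: hash the file bytes and group matching digests. A short Python sketch (the file names and bytes below are invented for illustration):

```python
import hashlib
from collections import defaultdict

def content_hash(data: bytes) -> str:
    """Return a SHA-256 digest of the raw file contents."""
    return hashlib.sha256(data).hexdigest()

def group_duplicates(files: dict) -> list:
    """Group file names whose contents are byte-for-byte identical."""
    groups = defaultdict(list)
    for name, data in files.items():
        groups[content_hash(data)].append(name)
    return [names for names in groups.values() if len(names) > 1]

# The same photo saved twice under different auto-generated names:
files = {
    "IMG_0001.jpg": b"\x89PNG...pixels...",
    "photo_final.jpg": b"\x89PNG...pixels...",
    "notes.txt": b"meeting notes",
}
print(group_duplicates(files))  # [['IMG_0001.jpg', 'photo_final.jpg']]
```

Hashing only catches exact copies, though: a re-encoded image or re-sampled audio file produces different bytes entirely, which is why the content-based vector approach discussed in this blog is needed.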

This blog focuses on the challenges associated with duplicate data and on how to detect it in unstructured folder directories.

The complications of unstructured data

Unstructured data is defined as data that lacks a predefined data model or that cannot be stored in relational databases. According to one report, 80% to 90% of the world’s data is unstructured, the majority of it created in the last couple of years, and it is growing at a rate of 55%-65% every year. Unstructured data may contain large amounts of duplicate data, limiting enterprises’ ability to analyze it.

Here are a few issues with unstructured data (duplicate data in particular) and its impact on any system and its efficiency:

  • Increase in storage requirements: The more duplicate data, the higher the storage requirements. This substantially increases the operating costs for applications.
  • Large number of data files: This significantly increases the response time for every type of search function.
  • Delays in migration: Migrating data from one storage facility to another takes longer.
  • Difficulty in eliminating duplicates: Removing duplicate files becomes harder as the system scales.

Redundant data creates disarray in the system. For that reason, it becomes imperative for organizations to identify and eliminate duplicate files. A clean database free of duplicate data avoids unnecessary computation requirements and improves efficiency.

Challenges in duplicate record detection

Detecting duplicate files by searching on file characteristics like name, size and type may seem the easiest method. However, it is rarely the most efficient one, especially at large scale. Here’s why:

  • Searching with file names: Most of the applications use unique file names to store media files. This makes the search difficult because the same file can be under different names. Identification of duplicate data is not possible unless the content is examined.
  • Search based on content: As searching with file names isn’t suitable for applications, a search based on content appears to be the next option. However, if we are dealing with a large document or pdf with multiple pages, this is not a feasible solution either. It will not only have high latency but will also be a computationally expensive task.
  • Search based on types and formats: Media files can be of different types like images, video, audio and so on. Each type of media file can be stored in multiple formats. For instance, an audio file can be saved as .wav, .mp3, AAC or others. The file structure and encoding for each format will be different, hence making the detection of duplicate files difficult.

The proposed solution

A suitable solution to detect duplicate files must handle large volumes of data and multiple media formats at low latency. If each file were converted into a multi-dimensional vector and fed as input to a nearest neighbors algorithm, one would get the top 5-10 possible duplicate copies of the file. Once files are converted into vectors, duplicates can be easily identified, as the distance between the vectors of duplicate files will be almost zero.

Here’s how different types of files can be converted to multi-dimensional vectors.

  1. Image files: Images are multi-dimensional arrays of pixels. Each pixel has three values – red, green and blue. When passed through a pre-trained convolutional neural network, an image or a video frame gets converted into a vector. A convolutional neural network is a deep learning architecture specifically designed to work with image inputs. Many standard architectures like VGG16, ResNet, MobileNet and AlexNet have proven very efficient at prediction from image inputs. These architectures are trained on large standard datasets like ImageNet with classification layers at the top.

     In a very simple convolutional neural network, the images are fed into multiple convolution layers as inputs. Convolution layers are trained to identify underlying patterns in image inputs; each convolution layer has its own set of filters that multiplies the pixels of the input image. The pooling layer averages blocks of pixels and reduces the image size as it passes to the next step in the network. Finally, the flatten layer collects the output of the pooling layers and emits the vector form of the image.
  2. Text files: To convert a text file into a vector, the words that comprise the file are used. Words are just combinations of ASCII character codes, and there is no direct numeric representation for a complete word. In such cases, pre-trained word vectors such as Word2Vec or GloVe can be used. Pre-trained word vectors are obtained by training a deep-learning model, such as the skip-gram model, on large text corpora. More details on the skip-gram model are available in the TensorFlow documentation. The output vector dimension depends on the chosen pre-trained word representation model.

     To convert a text document with a variable number of words, the Average Word2Vec representation can be used over the complete document. The Average Word2Vec vector is simply the mean of the individual word vectors:

     avg_w2v(doc) = (1/n) × Σ w2v(word_i), for i = 1 … n,

     where n is the number of words in the document. This solution can be made more robust by appending a 36-dimensional vector (counts of the 26 letters and 10 digits) to the final representation of the text file. This helps in cases where two text files have the same characters but in different sequences.
  3. PDF files: PDF files usually contain text, images or a mix of both. To build a more inclusive solution, vector conversion for both text and images is programmed in, combining the approaches discussed above.

     First, the text is extracted from the PDF file and passed through the same pipeline as before. Similarly, each page of the PDF is treated as an image and passed through a pre-trained convolutional neural network. Since a PDF can have multiple pages, the average of all page vectors is taken as the final representation.
  4. Audio files: Audio files stored in .wav or .mp3 formats contain sampled values of audio levels. Audio signals are analogue, and to store them digitally they undergo sampling: an analogue-to-digital converter captures the sound wave at regular intervals of time (samples) and stores the values. The sampling rate may vary across applications, so while converting audio files to vectors, a fixed resampling rate is used to standardize them.

     Another difficulty is that the lengths of audio files vary. To solve this, a fixed-length vector is produced by padding (adding zeros at the start or end) or trimming (cutting the vector to a fixed length), depending on the audio length.
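A minimal pure-Python sketch may make these conversions concrete. The 2-dimensional word embeddings and sample values below are made up for illustration; a real pipeline would use pre-trained Word2Vec/GloVe embeddings for text and a pre-trained convolutional network (VGG16, MobileNet, etc.) for images:

```python
def pool_and_flatten(image, k=2):
    """Average-pool k x k blocks of a 2-D image, then flatten to a vector
    (only the pooling + flatten steps of the network described above)."""
    return [
        sum(image[r + dr][c + dc] for dr in range(k) for dc in range(k)) / (k * k)
        for r in range(0, len(image), k)
        for c in range(0, len(image[0]), k)
    ]

def average_word_vector(words, embeddings):
    """Average Word2Vec: the mean of the word vectors, one value per dimension."""
    vectors = [embeddings[w] for w in words if w in embeddings]
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(len(vectors[0]))]

def char_count_vector(text):
    """The 36-dimensional extension: counts of 'a'-'z' and '0'-'9'."""
    return [text.lower().count(c) for c in "abcdefghijklmnopqrstuvwxyz0123456789"]

def fix_length(samples, length):
    """Pad with zeros or trim so every audio vector has the same length."""
    return (samples + [0.0] * (length - len(samples)))[:length]

emb = {"tea": [1.0, 0.0], "green": [0.0, 1.0]}      # toy 2-D embeddings
print(average_word_vector(["green", "tea"], emb))    # [0.5, 0.5]
print(fix_length([0.1, 0.2, 0.3], 5))                # [0.1, 0.2, 0.3, 0.0, 0.0]
print(pool_and_flatten([[1, 1, 2, 2],
                        [1, 1, 2, 2],
                        [3, 3, 4, 4],
                        [3, 3, 4, 4]]))              # [1.0, 2.0, 3.0, 4.0]
```

In practice the convolution filters themselves come from the pre-trained network; only the pooling and flattening at the end of that pipeline are sketched here.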

Finding duplicates with vector representations

With vector representations for all types of files, it now becomes easier to find duplicate data based on the difference in distance of respective dimensions. As previously stated, detection by comparing each vector may not be an efficient method as it can increase latency. Therefore, a more efficient method with lower latency is to use the nearest neighbors algorithm.

This algorithm takes vectors as inputs and computes the Euclidean distance or cosine distance between the respective dimensions of all the possible vectors. The files with the shortest distance between their respective vector dimensions are likely duplicates.

Computing Euclidean distances by brute force takes O(n²) time, but the optimized scikit-learn implementation with KDTrees reduces this to about O(n(k + log n)), where k is the dimension of the input vectors.
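For intuition, here is a brute-force nearest-neighbour sketch in plain Python; the file names and vectors are made up, and a production system would use scikit-learn’s KD-tree-backed NearestNeighbors instead of this linear scan:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two file vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, library):
    """Brute-force scan over (name, vector) pairs; a KD-tree avoids
    comparing the query against every stored vector."""
    return min(library, key=lambda item: euclidean(query, item[1]))

library = [
    ("song_a.mp3",      [0.10, 0.90, 0.30]),
    ("song_a_copy.mp3", [0.10, 0.90, 0.31]),  # near-duplicate of song_a
    ("song_b.mp3",      [0.80, 0.20, 0.50]),
]
name, vec = nearest_neighbor([0.10, 0.90, 0.30], library)
print(name)  # song_a.mp3
```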

Please note that images, texts, PDFs and audio files pass through different processing pipelines, so the resulting vectors may not be on the same scale. Since the nearest neighbours algorithm is distance-based, we may not get correct results if the vectors are on different scales. For instance, one vector’s values may vary from 0 to 1 while another’s vary from 100 to 200; in that case the second vector will dominate the distance irrespective of actual similarity.

The nearest neighbour algorithm also tells us how similar the files are: the smaller the distance between dimensions, the more similar the files. Each file vector has to be scaled to a standard range to get a uniform distance measure. This can be done with a pre-processing technique such as StandardScaler from scikit-learn. After pre-processing, the nearest neighbour algorithm can be applied to get the nearest vector for each file. Since the Euclidean distances are calculated along with the nearest neighbour vectors, a distance threshold can be applied to filter out less probable duplicates.
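Putting scaling and thresholding together, a small sketch with made-up vectors, and a hand-rolled standardization standing in for scikit-learn’s StandardScaler:

```python
import math

def standardize(vectors):
    """Scale each dimension to zero mean and unit variance,
    mimicking scikit-learn's StandardScaler."""
    dims = len(vectors[0])
    means = [sum(v[d] for v in vectors) / len(vectors) for d in range(dims)]
    stds = [
        math.sqrt(sum((v[d] - means[d]) ** 2 for v in vectors) / len(vectors))
        for d in range(dims)
    ]
    return [
        [(v[d] - means[d]) / stds[d] if stds[d] else 0.0 for d in range(dims)]
        for v in vectors
    ]

def probable_duplicates(names, vectors, threshold):
    """Pairs whose scaled Euclidean distance falls below the threshold."""
    scaled = standardize(vectors)
    pairs = []
    for i in range(len(scaled)):
        for j in range(i + 1, len(scaled)):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(scaled[i], scaled[j])))
            if dist < threshold:
                pairs.append((names[i], names[j]))
    return pairs

names = ["doc_a.pdf", "doc_a_copy.pdf", "doc_b.pdf"]
vecs = [[0.10, 0.20], [0.11, 0.20], [0.90, 0.70]]
print(probable_duplicates(names, vecs, threshold=0.2))
# [('doc_a.pdf', 'doc_a_copy.pdf')]
```

Tuning the threshold trades precision against recall: a tight threshold keeps only near-identical files, while a looser one also surfaces re-encoded or lightly edited copies.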

Conclusion

Data duplication in any system will impact its performance and demand unnecessary infrastructure requirements. Duplicate record detection based on file characteristics is not a recommended method as it might require an examination of the content for accurate results. Vector-based search is a more efficient technique for duplicate record detection. Successful implementation of this methodology can help identify the most and least probable duplicate files in unstructured data storage systems.

Eight Quick Tips to Choose the Best Public Cloud Provider

Cloud adoption is all pervasive today – across industries, businesses and geographies.

One of the drivers for this high rate of adoption is the fact that cloud services support digital innovation by providing scalable and cost-effective solutions for software infrastructure, storage, security, connectivity and other specialised services. Cloud migration for enterprises of any size is therefore not a question of ‘if’, but of ‘when’ and more specifically, ‘how’.

Cloud Migration: The Big Question

Whether your organization is thinking of migrating its on-premise setup to the cloud, or you are a consulting company that needs to offer a recommendation to a client, the biggest and most fundamental challenge is to identify the right cloud service provider. The obvious top three names in this space are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). There are others too, but these three are the most mature and hence the most sought after. An important aspect to consider is that not every provider suits every kind of requirement. While the spectrum of offerings may be wide, the choice has to pivot around the business needs that drive the cloud migration.

There are several instances where companies that migrated their workloads to the cloud realized much later in the journey that the chosen service provider was not the right one. Some of them have had to go back to square one and migrate from one cloud to another. A deeper knowledge and understanding of cloud service providers is therefore required to make a wise decision. If this expertise is not available in-house, consulting cloud specialists for their expert opinion and recommendation is certainly a good idea.

The cloud system infrastructure services (IaaS) segment is forecast to grow by 30.5 percent in 2023 compared to 2022. The public cloud services market as a whole is expected to grow by 21.3 percent, with only cloud business process services (BPaaS) experiencing single-digit annual growth rates. (Source: Statista)

How to Select a Cloud Service Provider?

Now that we have established the need to choose wisely, let us look at the aspects you should consider before selecting a cloud provider. I bring you a checklist of eight.

1. On-premise Workload Environment

Thorough research and comprehensive knowledge of the on-premise workload environment to be migrated is fundamental to making the right decision. The workload environment includes the operating systems, software, network protocols and many other aspects. Below is a reference list of some elements that comprise this environment:

  • Operating systems (Windows or Linux)
  • Open source or license software requirements
  • Public-facing or internal applications
  • Application network port requirements
  • Virtual machine hardening
  • Dependencies on other applications
  • Internal application releases
  • Security concerns
  • Application auto scaling capabilities.

2. The Goal of the Migration

After identifying the workloads to be migrated, the next important step is to answer: ‘what do we want to achieve from this migration?’ The goal needs to be clearly defined and communicated within the team and the enterprise as a whole. Everyone in the organization should know the benefits and the reasons for migrating the workload to the cloud. Below are a few reasons and benefits that organizations could consider:

  • Accessibility for their customers
  • On-premises hardware expiry
  • Acquisition of additional hardware
  • Organization expansion and subsequent auto-scaling
  • Cost optimization
  • 24 hours’ availability.

3. Motivated and inspired Team

Involving a well-rounded team in the migration plan and taking suggestions from them is often overlooked. Undoubtedly, an inspired and motivated team can be the secret ingredient and a big asset in the long migration journey. Here’s how you can inspire and motivate your team,

  • Involve them in the migration discussions right from the start
  • Take suggestions from them
  • Discuss the goals, objectives and challenges
  • Give appreciation to those who deserve it
  • Provide the required training.

4. Costing

It is important to know the cost benefits of each provider before arriving at the final decision.  Compare the costs for the long term in a systematic and objective way. Here are some pointers on how to go about it:

  • Create a Technical Oversight Committee (TOC) for 3 years to monitor and predict expenses
  • Understand the cost for reserved, spot, and on-demand instances for services
  • Compare the major expense component
  • Do a region-wise comparison
  • Compare the cost with Rehost, Refactor, Re-Architect, and Re-build strategy approaches
  • Ask for discounts on specific types of resources that you would be deploying more and the regions where your workloads will be deployed
  • Ask to omit or reduce the software and VM license cost for development and testing environments
  • Understand the cost of hiring the resources for different cloud providers.

5. Perform Proof of Concepts  

Select an appropriate workload and run a proof of concept (PoC) on each cloud service provider you are considering. This will give you a realistic overview of your environment and help you visualize the actual results. You can also get an idea of whether your solutions will work on the cloud or not. Below are the points you can evaluate after the PoCs,

  • Feasibility of the solution
  • Architectural approach
  • Cloud readiness and ease of use
  • DevOps and Automation
  • Cloud running cost
  • Software availability
  • Performance on each cloud
  • Scalability of each cloud.

6. Cloud Provider’s Resources Availability

Up to this point you may have done everything right: from setting up the goal to comparing costs, motivating your team, and completing the PoCs. You may even have narrowed down your search to a chosen cloud service provider. However, before starting the migration process, you will need to recruit specialist resources. That is when you may realize that the skillset for this provider is not available in the market, putting your entire plan at risk. This problem is usually faced by service provider companies. Finding the right candidate for your cloud migration project can feel like a treasure hunt; if the specific cloud technology is rare, your company will need to spend more time and money to find suitable candidates. The availability of technical resources for each cloud service provider is therefore a critical factor. Some points to note,

  • AWS is a more mature cloud service provider and so AWS skilled resources are available easily.
  • Azure is now slowly gaining traction. So resources with Azure skillset are also available but not as easily.
  • GCP is still in its nascent stage. Technical skills for the GCP cloud are not easy to find.
  • Oracle, IBM and Alibaba are far behind in public cloud penetration, so finding relevant resources will be very tough.

It goes without saying that cost of hiring resources will go up if their availability is low.

7. Community and Cloud Service Provider’s Support

Before selecting the cloud providers check their service plans, service support time and methods. Do make note of any add-on facilities. Also, check the community support available for the providers. Below are some points to consider along these lines,

  • Cost of support
  • Types of support (email, phone, chat, video conference, etc.)
  • SLA time
  • Marketplace resource support
  • Service support.

8. Software Availability

If you are using the Rehost migration approach and have specific requirements for software and versions, it is worth checking their availability in the cloud service marketplace. Legacy applications usually face this availability issue in the cloud environment.

Gartner predicts that by 2026, public cloud spending will be more than 45% of all enterprise IT spending. It was below 17% in 2021. The future is definitely on the cloud. For a smooth ride into the future, find a public cloud provider that works best for your organization.

How blockchain can save billions for the media industry

The global entertainment and media industry is worth $2.1 trillion in revenue today. This trillion-dollar industry is subject to multiple risks associated with content distribution, rights management and royalty payments to artists. Illegal streaming and downloading of content have resulted in multi-billion-dollar revenue losses. According to one report, the industry is expected to lose around $51.6 billion due to copyright piracy in 2022.

Very often, pirated copies of digital music are made quite effortlessly, and most attempts to prevent piracy have failed. This in turn affects the royalty payments made to creators for the rights to use or publish their content. Moreover, payments are not always guaranteed and are based on traditional airtime figures. There has been no effective way to control content distribution.

That was, until now.

Regain your control, creators!

With the blockchain revolution, the industry’s persistent problems now have apt solutions. Blockchain technology can be extremely effective in solving problems like copy protection and royalty payments. The technology connects consumers, artists and all other stakeholders in the industry and provides full transparency over the distribution process.

Blockchain provides a network where every digital music file is encrypted so that only paying customers can access it. The payment mechanism for accessing the content is controlled by a smart contract, eliminating the need for a centralized authority. Payments are made automatically based on the logic embedded within these smart contracts and the permissions prescribed for the number of downloads. All transactions in the blockchain network are recorded and immutable, making the process completely transparent and accessible to all stakeholders. This prevents illegal copying of digital music files altogether, consequently preserving creators’ copyrights.
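Real smart contracts run on a blockchain (typically written in Solidity on Ethereum), but the pay-then-access logic described above can be sketched in plain Python; the class, prices and names below are purely illustrative:

```python
class ContentContract:
    """Toy sketch of smart-contract logic: access is granted only after
    payment, capped at a download limit, and every transaction is
    appended to an append-only log."""

    def __init__(self, owner, price, max_downloads):
        self.owner = owner
        self.price = price
        self.max_downloads = max_downloads
        self.downloads = {}   # customer -> number of downloads used
        self.ledger = []      # append-only transaction record

    def purchase_download(self, customer, payment):
        if payment < self.price:
            raise ValueError("insufficient payment")
        if self.downloads.get(customer, 0) >= self.max_downloads:
            raise PermissionError("download limit reached")
        self.downloads[customer] = self.downloads.get(customer, 0) + 1
        # Royalty goes straight to the owner, with no intermediary:
        self.ledger.append((customer, self.owner, payment))
        return "decryption-key-for-" + customer

contract = ContentContract(owner="artist", price=1.0, max_downloads=2)
contract.purchase_download("alice", 1.0)
contract.purchase_download("alice", 1.0)
print(len(contract.ledger))  # 2
```

On an actual blockchain, the ledger appends and the permission checks would be enforced by the network itself rather than by a single program, which is what makes the record immutable.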

Blockchain – connecting content & creators

The cryptographic features of blockchain technology tie creators to their content to prevent plagiarism. For instance, a digital music file on a blockchain network contains the owner’s information and a time stamp, both of which are immutable and traceable. The legal owners of the content are cryptographically linked to it, and this ownership cannot be transferred to another user unless the original owner grants permission. Copyright transfers are easily managed and traced with blockchain, as all transactions are recorded and cannot be tampered with. Smart contracts can then control all distribution and payments to the concerned parties.

Blockchain technology provides owners of intellectual property (IP) with tools to better monitor and protect their work. Preventing plagiarism of any previously copyrighted content is just one of the many applications the blockchain technology has to offer. Blockchain for businesses will reduce the enterprises’ dependency on multiple security tools and has the potential to create high levels of trust for any transactions, thus enabling leaders to focus on better marketing strategies. This trust factor combined with the ease of use is driving the demand for blockchain amongst enterprises.

Blockchain for media & entertainment industry

The media industry is on the front lines of the digital revolution. By adopting multiple emerging technologies, the industry is enriching its user experience through data-driven insights that in turn build a strong brand value and engaging social media presence.

The media industry often faces the challenge of controlling ownership and distribution. Web3 applications allow creators to effectively monetize their art. Creators can also set up an NFT marketplace, apply smart contracts to profit from future sales and reward loyal fans who invest early in their success.

Here are some ways by which the media industry can mitigate the existing challenges:

Asset Management Security

The media industry has not been able to control the digital sharing of content effectively. Blockchain applications allow creators to verify identities, limit sharing and retain ownership of digital assets.

New Revenue Streams

Blockchain enables creators to sell exclusive assets as NFTs and retain a portion of the profits from these assets as they are traded further in the future. This accountability also affects streaming, where pay-as-you-go models empower consumers and reward artists directly.

Fan Connections

An artist can sell shares of their career, for example at a launch party, allowing fans to be part of the journey and rewarding their loyalty as the career grows. It is similar to a fan club where multiple parties are involved, ensuring revenue remains inside the creators’ community.

Blockchain in Television

Blockchain can vet digital assets and eliminate fake videos before they make it to the news. Consumers can pick the channels they want and pay only for the content they wish to consume. NFTs can create exclusivity in a streaming world where everything is available all the time.

Blockchain in Film Distribution

Blockchain can address challenges associated with identity, ownership and copyright. For instance, scenarios where an actor wants to be paid depending on the success of the movie, a studio wants to accurately price ads and product placements or when creators want to control access and ownership of work.

Blockchain in Music

The legacy of Napster lives on in the peer-to-peer sharing of music files to this day, and an entire industry had to adapt. But challenges with payments, stream tracking and payment distribution persist. Blockchain applications like smart contracts, NFTs and micropayments can be the apt solution that this industry needs.

Unique features of Blockchain

Blockchain is a promising technology backed by its core strengths – transparency, immutability and strong security. Though best known for its applications in banking, blockchain is set to disrupt many other verticals with its distinctive applications. Here are a few examples of how blockchain’s unique features can be applied to the media industry.

  • Immutability: Prevents the alteration of recorded data and helps with censorship resistance.
  • Security: Ordered, append-only data structures result in a high degree of auditability and reliability.
  • Transparency: Makes ledger information visible to all participants, eliminating asymmetric information.
  • Decentralization: The ledger is distributed across many nodes, so there is no single point of failure or attack vector for hackers and other malicious actors.

Benefits of Blockchain in media and entertainment

The music business still primarily runs on legacy systems and antiquated business models developed at a time when songs were predominantly distributed offline rather than released on the internet. Only a few competitors have managed to keep up with digitization, and they now control the streaming business, squeezing artists’ income.

The open and decentralized nature of the public Ethereum platform will allow actors in the entertainment industry to reap the following benefits:

  • Decreased IP infringement
  • Disintermediated content from industry intermediaries
  • Direct monetization of all copyrighted assets through smart contracts and p2p micropayments

Digital piracy, fraudulent copies, infringed studio IP and duplication of digital items cost the US film and TV industry an estimated $71 billion annually. Enterprise Ethereum enables artists and creators to digitize the metadata of their unique content. It also manages and stores IP rights on a time-stamped and immutable ledger. Blockchain, with its append-only structure, makes it easier for creators to legally enforce their rights if infringement happens.
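To see why a time-stamped, append-only ledger makes tampering with IP records evident, consider this minimal Python sketch of a hash-chained ledger. It is an illustrative toy, not Enterprise Ethereum: real blockchains add distribution and consensus on top of this chaining idea.

```python
import hashlib
import json
import time

# Toy append-only ledger: each entry records content metadata, a timestamp
# and the hash of the previous entry. Changing any earlier entry breaks
# every hash that follows it, which is what makes tampering detectable.

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(ledger: list, metadata: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"metadata": metadata, "timestamp": time.time(), "prev": prev}
    entry["hash"] = entry_hash({k: entry[k] for k in ("metadata", "timestamp", "prev")})
    ledger.append(entry)

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for e in ledger:
        body = {k: e[k] for k in ("metadata", "timestamp", "prev")}
        if e["prev"] != prev or e["hash"] != entry_hash(body):
            return False
        prev = e["hash"]
    return True

ledger = []
append(ledger, {"title": "Demo Track", "creator": "Artist A"})
append(ledger, {"title": "Demo Video", "creator": "Artist B"})
print(verify(ledger))  # True: the chain is intact

ledger[0]["metadata"]["creator"] = "Impostor"  # attempt to rewrite history
print(verify(ledger))  # False: tampering is immediately detectable
```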

Best practices to integrate Salesforce and SAP

The integration between Salesforce and SAP will help achieve better brand value and profits for any organization. Successful integration of these two powerful enterprise platforms creates the potential to strike an equilibrium between business operations and customer relations, which is integral for the success of any product a company launches.

Salesforce is a leading cloud-based customer relationship management platform that provides customized applications and software solutions for sales, service, marketing and analytics. It unites multiple functions of an organization from anywhere with Customer 360, a Salesforce-integrated CRM platform that powers the entire suite of connected apps.

On the other hand, SAP stands for systems, applications and products in data processing. It is a leading producer of software used for the management of business processes, developing solutions that facilitate effective data processing and information flow across organizations. SAP software assists enterprises in managing both their business operations and customer relations.

A Salesforce SAP integration opens the possibility of better customer service and enhanced business profits. The integration helps organizations across industries, irrespective of size, run their businesses profitably, adapt continuously and grow sustainably.

Businesses that undergo the Salesforce SAP integration have a unique advantage in managing and tracking customer relationships. While Salesforce manages front-end information about customers, SAP manages back-end tasks. Organizations that integrate the two benefit from additional business functions through enhanced productivity and value-added insights. This, in turn, improves customer interaction and experience.

Benefits of Salesforce SAP Integration

Organizations can gain numerous advantages by leveraging the integration of Salesforce and SAP. Some of the benefits of this integration include:

  • Effective data management with business intelligence capabilities
  • Improved invoice creation
  • Real-time error management with troubleshooting services
  • Processing of orders in real-time for optimal outcomes
  • Accelerated cash flow, thereby maximizing return on investment (ROI)

The Salesforce and SAP integration is crucial for a 360-degree view of customer data, which enables a seamless customer journey. However, despite its prevalence, getting the integration right is a challenge for many organizations. The implementation is not straightforward because both Salesforce and SAP are complex solutions built as proprietary, standalone offerings. Neither platform is designed to work with other software, and here’s why.

SAP is built as a back-end solution and many of its offerings were created much before the age of cloud computing. On the other hand, Salesforce is a cloud pioneer and created for front-end use. Therefore, a seamless integration requires a planned approach to account for the technological differences between SAP’s on-premises solutions and Salesforce’s cloud-based solutions.

When an organization plans a Salesforce and SAP integration, mapping out its processes in advance is the best place to start. Approaching the integration through the lens of a process, rather than simple data mapping or end-to-end connectivity, makes the business more scalable and efficient. The rapid adoption of cloud technology is also changing how enterprise application support is developed and implemented to assist such integrations.

Challenges of integrating Salesforce and SAP

Although the Salesforce SAP integration could translate into multiple benefits for organizations, its implementation could come with its own set of limitations and challenges. Any successful integration requires the right approach and the right set of enabling technologies. Apart from the core approach, other technical issues could include adapters and interfaces, communications, semantic mediation, format conversion and security.

Below are some of the most common challenges encountered during the integration:

  • Technical disparities as SAP is an on-premise software whereas Salesforce is a cloud solution
  • Difficulty in synchronizing data from SAP to Salesforce
  • Salesforce users generate quotes using product and price book information that must be linked with corresponding opportunities in SAP, while product and pricing information in SAP needs to be synchronized with Salesforce.
  • Data often needs to be processed in Salesforce before it reaches the order and execution phase in SAP.
  • Relevant data such as related order history and current financial status need to be accessible in real-time for the Salesforce user.

The idea is to leverage the best approach that is consistent with the organization’s integration requirements. Of course, there are several options to achieve optimal integration according to those requirements. The importance of managed IT infrastructure grows manifold during such important operations, a point that every organization must take note of.

Design solution in cloud integration

Below is a graphical representation of the technical workflow, showing the different steps involved during the integration of Salesforce and SAP.

[Figure: Salesforce-SAP integration workflow]

Step-by-step process of SAP and Salesforce integration

The main steps for integration of SAP and Salesforce are as below:

  • Log in to the Salesforce developer account
  • Go to Setup and type "API" in the Quick Find box
  • Download the required WSDL file
  • Use the Salesforce WSDL to create the required SOAP project
  • Create the WSDL/XSD in two steps: first create an upsert request using SoapUI, then create an XSD for that request using external tools
  • Import the external definition created in the previous step to create another ESR object
  • Create the message mapping by building SOAP requests
  • Add an API lookup code to fetch the session ID and server URL
  • Complete the ID (Integration Directory) configuration by creating two ICOs: one to get data from ECC and one to send it to Salesforce
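To make the upsert step above more concrete, here is a hypothetical Python sketch of the kind of SOAP envelope that SoapUI generates from the Salesforce enterprise WSDL. The namespaces, session ID and the `SAP_ID__c` external-ID field are placeholder assumptions; in the real integration these values come from the WSDL and the login response.

```python
import xml.etree.ElementTree as ET

# Sketch of a Salesforce-style SOAP upsert request: a SessionHeader carrying
# the session ID, and a body upserting an Account keyed on an external ID
# field. All names here are illustrative, taken from a typical enterprise WSDL.

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SF_NS = "urn:enterprise.soap.sforce.com"

def build_upsert(session_id: str, external_id_field: str, fields: dict) -> str:
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    session = ET.SubElement(header, f"{{{SF_NS}}}SessionHeader")
    ET.SubElement(session, f"{{{SF_NS}}}sessionId").text = session_id
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    upsert = ET.SubElement(body, f"{{{SF_NS}}}upsert")
    ET.SubElement(upsert, f"{{{SF_NS}}}externalIDFieldName").text = external_id_field
    record = ET.SubElement(upsert, f"{{{SF_NS}}}sObjects")
    for name, value in fields.items():
        ET.SubElement(record, f"{{{SF_NS}}}{name}").text = value
    return ET.tostring(env, encoding="unicode")

# Example: upsert an Account matched on a hypothetical SAP ID field.
xml_request = build_upsert("DUMMY_SESSION", "SAP_ID__c",
                           {"Name": "Acme Corp", "SAP_ID__c": "0001042"})
print("upsert" in xml_request and "sessionId" in xml_request)  # True
```

Because the upsert is keyed on the external ID field, the same request either creates the Account or updates the existing one, which is what keeps SAP and Salesforce records in sync without duplicates.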

Conclusion

As one of the most widely used enterprise resource planning solutions in the market, SAP plays a key role in the most critical aspects of business processes for many companies. To fully automate and optimize these business processes, companies need to integrate SAP with other applications within their organization. Integrating Salesforce and SAP is an essential step since there is a central need to bring these frameworks together to meet the prerequisites of the business.

The individual benefits of both Salesforce and SAP when combined during their integration empower organizations with the ability to turn vital customer data into meaningful and actionable insights. This will allow decision-makers to make crucial business decisions faster, streamline business processes, boost productivity and therefore gain a competitive advantage in the market.

InfoVision, with its cloud migration expertise, managed services and technical staffing services, is a trusted partner for achieving Salesforce SAP integration.

Planning Poker: A Proven Technique to Enhance Business Agility

The famous adage: “the whole is greater than the sum of its parts” fits so well in the context of how teams function. The same goes well in the context of an organization. Teams are the building blocks of an organization and have a significant role to play in fostering business agility.

In a team, the overall performance of the unit is critical to achieving shared objectives, and agility is a big accelerator that helps accomplish this goal. Most organizations are adopting agile or scaled agile frameworks in their digital transformation journeys to achieve business agility. As already mentioned, team agility is an essential ingredient of business agility.

In any routine project, given that different teams work on different types of user stories or tasks, a proven practice for planning and estimation that is used by agile teams is the Planning Poker technique. Based on the team requirement this technique can be used in a variety of team settings. This blog focuses on the ‘what’ and ‘how’ of this popular consensus-based estimating approach.

How Does the Planning Poker Technique Work?

The Planning Poker technique is used during sprint or iteration planning to estimate the story points for user stories. The scrum master facilitates the event, with the product owner and the team members as active participants. It uses the modified Fibonacci series – 0, ½, 1, 2, 3, 5, 8, 13, 20, 40, 100 – to estimate the size of a user story; the widening gaps between the numbers capture the growing uncertainty of larger estimates. A set of poker cards is placed in the center of the table, with multiple cards for each number in the series, e.g. ten cards for the number 5, ten cards for the number 8 and so on.

When the product owner shares a user story, each team member reads it and picks the poker card that best fits their estimate, without revealing it. They place their cards face down on the table, and on the scrum master's instruction, all members turn their cards over at the same time. Any variation in the revealed estimates is then discussed, along with each member's reasoning for the number chosen. Shared understanding is created through these thought-provoking discussions, and the scrum master repeats the round until the team reaches consensus on the estimate.

This process of estimation by each team member, and the discussions that follow, helps the team understand the different aspects involved in a user story. In this way, the Planning Poker technique yields a more accurate estimate for the team backlog and smoother execution of the sprint or iteration. Over time, team members become capable of good estimation, and the team can accommodate mid-sprint additions of user stories by the product owner. This increases the confidence of the team members and improves the overall functioning of the team. The technique is also designed to enhance the self-organizing character of the team. In short, Planning Poker is used for agile estimation within the team for better and healthier sprint delivery.
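The voting round described above can be sketched in a few lines of Python. This is a toy illustration, not a Planning Poker tool: the deck is the modified Fibonacci series, "consensus" here simply means everyone revealed the same card, and the sample votes are invented.

```python
import statistics

# Modified Fibonacci deck used for Planning Poker estimates.
DECK = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def reveal_round(votes: dict) -> dict:
    """Simultaneously reveal all cards and report whether the team agrees."""
    values = list(votes.values())
    assert all(v in DECK for v in values), "pick cards from the deck"
    return {"consensus": len(set(values)) == 1,
            "median": statistics.median(values)}

# Round 1: estimates diverge, so the scrum master triggers a discussion.
round1 = reveal_round({"Asha": 3, "Ben": 8, "Chen": 5, "Dee": 5})
print(round1["consensus"])  # False

# Round 2: after discussing the story, the team converges on 5 points.
round2 = reveal_round({"Asha": 5, "Ben": 5, "Chen": 5, "Dee": 5})
print(round2["consensus"], round2["median"])  # True 5.0
```

The simultaneous reveal is the key design choice: because no one sees the others' cards before committing, outliers surface honestly instead of anchoring on the loudest voice.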

Common Challenges

Here’s why it is important to have a consensus-based estimation technique.

  • In most teams, a majority of members are familiar with the waterfall way of working, where the complete scope is defined upfront and members work on the tasks assigned to them. They do not need to know the overall scope or how long the tasks will take. When the same members start working in agile teams, where the approach is the opposite, they struggle to estimate and to meet sprint goals.
  • Teams are dynamic: members leave and new ones join, the scope of work keeps changing, and new additions are uncertain. Agile teams still need to make sound decisions and move forward.
  • Wrong estimates by the scrum master, the product owner or the team leader – the people who usually estimate on behalf of the team – can lead to dissatisfaction, which in turn impacts delivery.

When is Planning Poker Required?

This technique is very helpful as it brings clarity and helps teams working in sprints or iterations deliver faster. This is especially true when:

  • The scope of the work is new or complex, with many unknowns
  • The team is consistently unable to meet its sprint goals
  • Requirements and customer priorities keep changing
  • The team is newly formed and just starting execution
  • The team is transitioning from waterfall to agile
  • The overall performance of the team needs improvement
  • The root cause of delivery issues is poor estimation

Driving Scrum Values

Scrum values create strong and resilient teams. Planning Poker drives the following scrum values within the teams.

  • Commitment: By using Planning Poker for estimation, all the team members commit to the sprint goal.
  • Focus: Since the team members discuss the different aspects of the user stories, they are equipped with the information required to start focusing on the task.
  • Openness: The team is encouraged to openly discuss the user stories. This helps in bringing out the unknowns.
  • Respect: Since all the team members discuss and participate in the estimation together, it creates mutual respect, as this is a way of learning from and understanding each other.
  • Courage: In a majority of situations, a lot of the team members do not speak and tend to follow the leader blindly. The Planning Poker practice encourages all the team members to participate and be courageous.

Due to these many benefits, teams following Scrum, the Scaled Agile Framework, Kanban and hybrid agile approaches practice Planning Poker.

This technique increases the predictability of the team's sprint goals, leading to higher levels of collaboration, performance and team agility. Team agility increases predictability at the program level and enhances business agility, a win-win for the team members, the organization and customers.

Why Salesforce flow is the new beginning for administrators

Simplifying business automation processes without coding has been one of the many remarkable features of the Salesforce platform. Coming from a .Net background, I was amazed to see the capabilities that Salesforce’s Workflow Rule brought in. Then came the Process Builder in 2015, enabling us to not just update parent records but also create new records and update related records. Process Builder became a tool for system administrators that could reasonably compete with Apex.

When Salesforce introduced Flow, I considered it an over-complicated and heavy version of Process Builder and Workflow Rules. I saw no compelling reason to adopt it as my business automation tool. If the work gets done through Process Builder, that is what I would do.

Why then did I board the Flow-train?

Initially, Flow was built on the Cloud Flow Designer. In its Spring 2019 release, Salesforce introduced Flow Builder, a declarative interface that provides a faster and more intuitive front end for creating individual flows. I started experimenting with Flow some time ago to explore the new features, and my perception of it changed for good. I boarded the Flow-train and decided not to look back, and here is why:

  • Records are created and updated quickly using Flow as compared to Process Builder and Workflow Rules.
  • Salesforce expanded the Flow’s capabilities to include both record-triggered and schedule-triggered Flow.
  • Salesforce announced that Process Builder and Workflow Rules would no longer receive product updates. Salesforce Flow will be the new tool for declarative process automation.

What else did Salesforce say?

In June 2020, Salesforce published a blog on what will be considered the best practices for any business process automation. The blog had three main takeaways:

  1. Use a ‘BeforeSave’ Flow to update a Salesforce record as it’s faster than Process Builder (it could possibly outperform Process Builder by 10x).
  2. Use an ‘AfterSave’ Flow to create records or send emails. This will increase the performance for end users as compared to Process Builder.
  3. If the logic of a Flow gets very complex then it’s probably best to use Apex coding.

Salesforce is planning to invest heavily in Flow. Therefore, it's advisable to start
leveraging Flow and considerably reduce the dependency on Process Builder and Workflow Rules.

But is Flow the only tool? I have asked myself this question every time I read Salesforce release notes. Undeniably, Flow is evolving at an incredible pace. With the Winter '21 release, it can be scheduled, triggered on record updates, called by platform events and even made visible or invisible to users. With Flow, you can reuse without rebuilding; you can build incredibly complex business logic and reuse it across multiple Flows. It can do what Workflow Rules, Process Builder and Approval Processes can do.

Therefore, to answer the question, I think Flow will soon become the go-to tool for administrators to automate all business processes in a declarative way.

Adios, Process Builder and Workflow Rules?

Well, not yet.

Salesforce will continue to allow system administrators to maintain existing Workflow Rules and Process Builders, as well as create new ones, but these tools will certainly not be enhanced any further.

At Dreamforce '21, Salesforce announced the retirement of Workflow Rules and Process Builder. Patrick Stokes, the product manager responsible for the retirement, has reassured users that there will be a formal roadmap governed by an end-of-life council. In the spirit of transparency, here are the stages they have planned:

  • Spring '22 release: Launch migration tool for Workflow Rules
  • Summer '22 release: Launch migration tool for Process Builder
  • Winter '23 release: No longer possible to create new Process Builders or Workflow Rules

My Recommendations

  • Don’t create new Process Builders or Workflow Rules in your organization. Instead,
    transition to Flow and get comfortable with it.
  • Never forget the complexities: if existing Process Builders or Workflow Rules are causing
    problems, move them to Flow.
  • If the existing Process Builders or Workflow Rules are working fine, let them be.

The Relevance of Simulated Phishing Campaign in Today’s World

To understand the relevance of simulated phishing campaigns, especially in today's times, one needs to learn about phishing and how it is weaponized to target employees of an organization. In this blog, I explain how phishing attacks work. The objective is to highlight the importance and impact of proven preventive strategies, such as simulated phishing campaigns, in organizations.

What is phishing?

Phishing is an organized multibillion-dollar cybercrime business. The attackers pose as a legitimate organization or individual and contact their targets (employees), through email, telephone, or text message. The attackers then lure employees to give away the organization’s sensitive data and compromise the critical infrastructure.

Phishing is done to gain a foothold in corporate or government networks. Almost 80% of phishing attacks are done through emails. The email recipients are tricked into clicking malicious links or downloading executable files, which leads to the installation of malware for data exfiltration and ransomware attacks.

There are different types of phishing attacks, including email phishing, vishing, whaling, smishing and spear phishing.

How is phishing weaponized?

The attack typically follows a phase-wise approach: reconnaissance of targets, crafting of the lure, delivery, exploitation and, finally, data theft or ransomware deployment.

Why Simulated Campaigns?

When it comes to securing an organization, employees are the weakest links because they are often the prime targets for cybercrimes. Phishing attacks are the easiest and most effective means to target employees. Today, phishing attacks are increasing despite having all anti-phishing measures in place. Therefore, employees need practical training to defend and keep these phishing attacks at bay.

One of the best ways to increase awareness about phishing is through simulated targeted phishing campaigns designed for all internal and contract employees. These campaigns should be run at regular intervals so that employees not only start becoming aware but also develop appropriate reflexes to differentiate between genuine emails and spam.  Our research confirms that a simulated phishing campaign is more effective in educating employees than any other method or strategy.  This is reinforced by the data that revealed a marked improvement in scores when retests were performed.

As per our previous simulated phishing campaigns delivered to our customers, we observed that on average, 25% of the phishing emails were opened, and at least 15% of those who opened them ended up giving away sensitive information or downloading executable files. For one of every three customers, the existing anti-phishing solution was ineffective in stopping the phishing campaign. Note that the above statistics were recorded despite the customers having delivered in-person awareness training.
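Campaign results like these are usually summarized as simple rates over the delivery funnel. The sketch below shows one plausible way to compute them; the sample numbers are invented to mirror the averages quoted above (25% opened, roughly 15% of openers compromised) and are not real customer data.

```python
# Hypothetical summary of a simulated phishing campaign: how many emails
# were opened, and how many openers were "compromised" (submitted data or
# downloaded the payload). All figures below are illustrative.

def campaign_summary(sent: int, opened: int, compromised: int) -> dict:
    return {
        "open_rate": round(100 * opened / sent, 1),        # % of sent emails opened
        "compromise_rate": round(100 * compromised / opened, 1),  # % of openers compromised
        "needs_retraining": compromised,                   # headcount for follow-up training
    }

summary = campaign_summary(sent=1000, opened=250, compromised=38)
print(summary)  # {'open_rate': 25.0, 'compromise_rate': 15.2, 'needs_retraining': 38}
```

Tracking these rates across repeated campaigns is what reveals the improvement on retests mentioned above: a falling compromise rate is the measurable sign that employees are developing the right reflexes.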

InfoVision’s Enterprise Cybersecurity & Risk Services (ECRS) practice offers simulated phishing campaign services and various anti-phishing technology controls to customers. The customized simulated phishing campaigns are effective not only in educating employees but also in evaluating how well existing anti-phishing controls prevent these attacks.

For further details about paid and free phishing campaign services contact us at digital@infovision.com.

6 Changes in Consumer Behavior Due to Digital Transformation In Retail

The traditional brick-and-mortar model of retail has been under siege by digitally enabled online and mobile channels. Consumer behavior in digital retail has undergone immense change due to the digital technologies at the core of this transformation. However, successful transformation requires careful planning and cross-collaboration across various retail functions.

The rise of the internet and the proliferation of digital transformation have had some profound and unexpected effects on 21st-century life. While the term digital transformation may seem like a vast and hazy concept, it is fairly easy to understand when one sees the constantly evolving changes in the retail sector.

A careful analysis of the trends and consumer behavior driving this transformation can help retailers maintain their focus and achieve tangible benefits. The new generation of customers is more informed and better prepared to navigate this environment. Their ability to research has grown with the technology available, such as computers, mobile phones and tablets.

The bottom line is that consumer buying behavior is evolving in digital retail, which in turn pushes for continuous transformation.

What drives consumer behavior?

Businesses such as Uber and Amazon have demonstrated to the world what digital transformation can do for an organization. They have taken traditional business concepts such as retail sales and transit and applied innovative digital technology that has left their competitors far behind. A new consumer-brand relationship has arisen with the emergence of the internet. Customers expect relevant content related to what they are doing anytime, anywhere, in the format they desire and on a device of their choice. With access to this information, the consumer can collate and analyze it at leisure and arrive at an informed choice. This educated consumer then crosses several digital platforms to make a purchase, which could be online or offline. It is undeniable that users now prefer applications that are agile and provide responsive, convenient navigation, making choosing a product as pleasant an experience as buying it.

How is digital transformation keeping pace with changing consumer behavior?

The disruption of digital technology and its impact on consumer behavior in digital retail has very real implications. How brand owners adapt their communication strategies to build successful and meaningful relationships with today's consumers will be their formula for staying in the race.

Here are the top consumer-behavior trends that retailers need to focus on in order to surge ahead of the competition:

1. Use of mobile apps

Mobile-based purchases are outpacing the growth of even online retail. Apart from the fact that bigger, more powerful smartphones enable better shopping experiences, the mobile phone is also emerging as a strong connector across all retail channels, linking in-store and online modes of shopping. In addition to basic information, consumers also have access to loyalty programs, real-time checks on store inventory and improved customer engagement. This convenience is irreplaceable in the mind of the consumer and influences buying behavior.

2. Mobile based payment applications

Mobile-based retail payments can be made either in person at the point of sale or remotely via mobile apps or browsers. Mobile payments provide a seamless experience for consumers from their smartphones. Consumers today opt for such payments because of their convenience and ease of use, the rewards and discounts they provide, and the compliance and security features they have in place.

3. Social media

Social media has made giant leaps in today's world, be it for staying in touch, accessing the latest information or keeping up with current trends. Many of today's consumers frequent social media sites such as Instagram, Facebook, Twitter, Pinterest and YouTube. Through these sites, consumers have an opportunity to familiarize themselves with a product as well as compare and contrast products by design, color, price and so on. They have realized that the content marketing on these sites is organic, relevant and value-adding. The added benefit of product recommendations and purchases coupled with social mixing is a trend consumers are making the order of the day.

4. Voice recognition and virtual reality

Voice-enabled search assistants such as Apple's Siri, Amazon's Alexa and Google Assistant are changing the way consumers look for retail products. Searching for products and services using natural speech makes the entire process frictionless and faster. The convenience of such voice recognition systems is steering today's consumer buying behavior toward a comfortable experience, and consumers are looking for more and more services and products to be channeled through this mode. Another important aspect of changing consumer behavior in digital retail is virtual reality, which allows consumers to experience a product in the virtual world. This gives the customer a more realistic picture of the product they intend to purchase.

5. Customer insights

Earlier, an individual approach was associated with luxury brand shopping. Today, with digital transformation, consumers find that customization is available for many more products and at affordable prices. The ability to customize based on individual needs engages customers better, since it makes them feel the product was tailor-made for them. Customization gives consumers better value for money and access to products that were previously unavailable due to size or color variations and the like.

6. Reverse showrooming

Reverse showrooming occurs when a customer browses and researches products online but purchases them at a physical store. Apparel and furniture benefit from this kind of retailing. It is extremely beneficial for consumers, since they enter a store armed with all the information they need before purchasing a product. Their online browsing has already told them all there is to know about a product, such as its origin, the materials used, the colors available and its availability in store. The last step, getting an actual feel of the product, is completed at the store, thereby hastening the purchase.

Conclusion

The retail industry is one of the most rapidly evolving verticals across the world. The change in consumer buying behavior is due to this digital transformation, which empowers today's consumer with deeper knowledge that assists them in making the best choice based on their needs.