Red teaming: The future of cybersecurity preparedness

With the global cost of a data breach skyrocketing to an average of US$4.35 million in 2022, businesses are turning to technological advancements to safeguard their information and unlock new commercial opportunities. And, as companies ramp up their security efforts, a myriad of cutting-edge services and standards are emerging to help them stay ahead of the curve. Insurers are taking notice – and so should businesses!

To proactively tackle security threats, businesses often conduct vulnerability analysis of their internal networks or perform penetration tests on live websites, both of which can uncover security flaws. But these measures alone are not enough. The data shows that businesses continue to fall victim to cyber-attacks, leaving sensitive data of customers, workers, or business partners vulnerable to theft or sale to competitors.

Enter red teaming – a comprehensive approach to security that not only combats vulnerabilities but also builds robust defenses. In this blog, we will explore the power of red teaming and learn how it goes beyond technical aspects by additionally taking into account human factors such as social engineering, physical security, and other avenues that attackers may use to gain access.

Not reactive, but proactive

The evolution of red teaming as a security technique can be traced back to the military, where it was used to simulate attacks and test defenses during training exercises. Over time, businesses and government agencies began to adopt this approach to assess their own security posture and identify weaknesses in their defenses before real attacks occur. Red teaming adopts much the same premise as ethical hacking. The objective is two-fold: firstly, to assess preparedness levels and, secondly, to identify gaps in real time.

One specific business need that red teaming addresses is the need to be proactive in the face of an ever-evolving threat landscape. Traditional security measures such as vulnerability scanning and penetration testing are reactive in nature, meaning they are designed to detect and respond to existing threats. Red teaming, on the other hand, takes a proactive approach that simulates attacks from multiple angles, allowing organizations to identify gaps in their defenses and strengthen them before an actual attack occurs.

Red teaming vs. penetration testing

Although red teaming and penetration testing share the objective of detecting and addressing vulnerabilities, their approaches differ in achieving robust security and creating a safer business environment. Red teaming is a comprehensive and methodical approach that involves a full scope cyberattack simulation to identify vulnerabilities and prevent attacks in any environment. In contrast, while penetration testing is essential, it is only a small part of what a red team exercise accomplishes. Penetration testers typically aim to gain access to a network, whereas red team operations have more ambitious goals.

Red team exercises simulate a realistic advanced persistent threat (APT) scenario, evaluating defensive tactics and producing a thorough risk analysis. Red teaming covers evasion, persistence, privilege escalation, and exfiltration, whereas penetration testing simulates only the first step in the cyber kill chain.

Cybersecurity Ventures predicts that the global annual cost of cybercrime will reach US$8 trillion in 2023. This and many more such data points drive home the message that cybersecurity experts must continuously innovate and develop more advanced solutions to combat cybercriminals and respond to emerging threats.

In the above context, implementing red teaming within an organization can provide numerous benefits, such as:

  • Assessing the organization’s defense system through simulated cyberattacks to determine the security level of policies
  • Categorizing related assets according to risk level
  • Detecting and exposing security vulnerabilities and loopholes
  • Evaluating the effectiveness of the organization’s security system during an attack

Typical red teaming approach

1. Footprinting & reconnaissance:

Footprinting and reconnaissance make up the pre-attack phase – the actions taken before the attack itself. In other words, the examination of the security posture of the target organization’s IT infrastructure begins with footprinting. A hacker can gather the following data during this stage:

  • Domain names
  • IP addresses
  • Namespaces
  • Employee information
  • Phone numbers
  • E-mail addresses
  • Job information
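
As a toy illustration of how such passively gathered data might be organized, the sketch below collects e-mail addresses from scraped page text into a simple footprint record. The `Footprint` structure and `harvest_emails` helper are hypothetical names for this example, not part of any standard tooling:

```python
import re
from dataclasses import dataclass, field

@dataclass
class Footprint:
    """Aggregates passively gathered target data (hypothetical structure)."""
    domain: str
    ip_addresses: list = field(default_factory=list)
    emails: list = field(default_factory=list)
    phone_numbers: list = field(default_factory=list)

# Simple pattern for e-mail addresses embedded in scraped page text
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(page_text: str, footprint: Footprint) -> None:
    """Extract e-mail addresses from page text into the footprint, de-duplicated."""
    for addr in EMAIL_RE.findall(page_text):
        if addr not in footprint.emails:
            footprint.emails.append(addr)

fp = Footprint(domain="example.com")
harvest_emails("Contact hr@example.com or sales@example.com.", fp)
print(fp.emails)  # → ['hr@example.com', 'sales@example.com']
```

Real reconnaissance relies on dedicated sources (WHOIS, DNS records, public profiles); this only shows the aggregation idea.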

2. Network penetration testing and application security testing:

Network penetration testing focuses on identifying vulnerabilities in an organization’s network infrastructure, including routers, switches, firewalls, and other network devices. It aims to simulate an attacker’s actions to gain unauthorized access to sensitive information or compromise the network.

Application security testing involves assessing the security of software applications, including web applications, mobile apps, and other custom-built software. The objective is to identify vulnerabilities that can be exploited to gain unauthorized access, manipulate data, or disrupt the application’s normal functioning.
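
A minimal sketch of the kind of check a network test automates is a TCP connect scan, shown here with Python's standard `socket` module. Real engagements use dedicated tools such as nmap; this illustrative `scan_ports` helper should only ever be pointed at systems you are authorized to test:

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

A red team would combine results like these with service fingerprinting and exploitation steps that are far outside this sketch.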

3. Social Engineering:

Social engineering is a manipulation technique that exploits human error to gain private information, access, or valuables. In cybercrime, these “human hacking” scams tend to lure unsuspecting users into exposing data, spreading malware infections, or giving access to restricted systems. Attacks can happen online, in-person, and via other interactions.

Generally, social engineering attackers have two goals:

  1. Sabotage: Disrupting or corrupting data to cause harm or inconvenience.
  2. Theft: Obtaining valuables like information, access, or money.
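
Defenders often counter social engineering with simple technical heuristics. The sketch below, with hypothetical helper names and an assumed trusted-domain list, flags lookalike domains of the kind used in phishing by measuring edit distance to known-good domains:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of domains the organization trusts
TRUSTED = ["example.com", "infovision.com"]

def looks_suspicious(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that nearly (but not exactly) match a trusted domain."""
    for good in TRUSTED:
        d = edit_distance(domain.lower(), good)
        if 0 < d <= max_distance:
            return True
    return False

print(looks_suspicious("examp1e.com"))  # → True
```

Such filters are only one layer; user awareness training remains the primary defense against social engineering.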

4. Reporting and analysis:

After the red team’s simulated attack is complete, you’ll go through a reporting and analysis process to determine the path forward. You’ll see how your blue (defensive security) team performed and which key vulnerabilities need to be addressed.

The red team prepares a comprehensive report detailing their findings, including vulnerabilities exploited, attack paths, and recommendations for improving security. The report is typically shared with the organization’s management and relevant stakeholders.

5. Closure:

Once the attack is over, the final closure stage begins. This stage involves not only managing the leftover digital remnants of the executed attacks, but also providing the blue team with one or more evaluation sessions in which the complete timeline is replayed in a workshop to maximize learning and awareness. The result of this phase is a detailed technical report and a perspective on your overall security maturity within your threat landscape.

Why strategize red teaming for your business?

Having invested millions of dollars to protect your network, employing a team of bug hunters to intentionally breach it may seem unwise. However, as the ever-evolving threat landscape described above makes clear, it is imperative to have adequate safeguards in place.

To learn more about an effective red team strategy that can ensure optimal security for your organization, contact the InfoVision Red Team at  info.ecrs@infovision.com.

Top Testing Trends in 2023

The recent attention brought to Google’s first demo of Bard reinforces the value of thorough testing prior to releasing products to the public.

Following the rocky start after the first demo resulted in a factual error, Google’s CEO sent a company-wide email calling on every employee to help shape and contribute to the product.  “Next week, we’ll be enlisting every Googler to help shape Bard and contribute through a special company-wide dogfood,” Pichai wrote in the email to employees, as per CNBC. “We’re looking forward to getting all of your feedback — in the spirit of an internal hackathon — more details coming soon,” he concluded.

This process, known as “dogfooding,” entails testing a product internally before releasing it to the public.

The post-pandemic world has undergone a significant shift from digitization towards digitalization. Staying informed about industry developments is therefore paramount for both organizations and individuals, to prevent errors from occurring as well as to stay ahead of the curve. What worked in the pre-pandemic world may no longer be relevant, and there is no room for error in today’s digitalized world.

Testing Challenges

One question that naturally follows is: What impact has Digital Transformation had on the Automation field, and specifically on the challenges of software testing?

Despite the advancements brought about by Digital Transformation, the challenges of software testing have largely remained unchanged, with a few new ones added to the list.

These challenges are numerous, and some examples include:

  • Time constraints: Testing can be a time-consuming process and is often compromised to meet project deadlines.
  • Resource allocation: Testing requires a variety of resources, including hardware, software, and personnel, and these can be in short supply.
  • Simulation: Setting up a testing environment that simulates real-world conditions can be challenging, especially when dealing with complex systems.
  • End-to-end coverage: Ensuring that all possible scenarios have been tested, especially in large and complex systems, can be a Herculean task.
  • Automation: While automation can make testing more efficient, automated tests need to be dynamic to keep up with software that may be constantly changing.

The constantly changing landscape of technology is swiftly altering the operations of organizations, affecting every stage of the development lifecycle, including planning, design, development, delivery, and operation. Quality at speed remains at the core of it all.

To keep up exceptional quality at speed, organizations must continuously revamp and innovate their tools and practices to meet production expectations. This is where fast-moving software testing trends come into the picture.

The exponentially expanding complexity of systems and environments also generates a snowballing volume of data. All of this leads to continually shifting software testing trends, which are the focus of this article.

In my opinion, the following list represents the top ten software testing trends that could emerge in 2023.

1. Hyper-Automation Testing

In simple words, hyper-automation is the automation of as many processes as possible, performed using Robotic Process Automation (RPA), Machine Learning & Artificial Intelligence, and Natural Language Processing (NLP).

As technology disruption accelerates, organizations are shifting towards hyper-automation, targeting cost cutting, better productivity, and augmented efficiency through automation. Moreover, hyper-automation aids in capitalizing on the data collected and generated by digitized processes.

In short, AI-powered testing tools can find bugs and defects more quickly and accurately than manual testing.

2. Shift-Left Testing

Imagine if testers were called in only at the end of the project lifecycle: errors and bugs in every functionality would be utterly difficult to trace and rectify. Defects are less costly when detected early!

That said, wouldn’t it be an ideal use of your resources if they were used to their full potential?

In shift-left testing, software and system testing is performed earlier in the lifecycle, in fact at every step of the lifecycle. As the name says, testing shifts one step to the left on the project timeline.
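
As a miniature illustration of the idea (with a hypothetical function and values), the checks below are written at the same time as the requirement, so defects surface on day one rather than at the end:

```python
# Shift-left in miniature: the acceptance checks exist alongside the
# requirement, and the implementation is grown to satisfy them.

def validate_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject out-of-range inputs early."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Checks that would normally live in a unit-test file, runnable from day one:
assert validate_discount(200.0, 25) == 150.0
assert validate_discount(99.99, 0) == 99.99
try:
    validate_discount(10.0, 150)
except ValueError:
    pass  # invalid input is caught at the earliest stage
```

In a real project these asserts would live in a test suite wired into continuous integration, so every commit is tested, not just the final build.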

3. Automated Regression and Regression Optimization

The automated regression testing technique has matured over time. It is no longer just a ‘good to have’; it has become a hygiene factor of any software development process. After any change, small or large, the code or the affected part of the application immediately goes through regression testing.

The testing process is made remarkably effective with test scripts, plans, and workflows that speed it up. Regression optimization goes a step further: rather than re-running the entire suite after every change, the suite is pruned and prioritized so that the tests most relevant to the change run first.
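
One common optimization is change-based test selection. The sketch below assumes a hypothetical coverage map from source files to tests and picks only the tests affected by a change:

```python
# Hypothetical mapping from source modules to the regression tests that cover them.
TEST_COVERAGE = {
    "auth.py": ["test_login", "test_logout", "test_password_reset"],
    "cart.py": ["test_add_item", "test_checkout"],
    "search.py": ["test_query", "test_filters"],
}

def select_regression_tests(changed_files):
    """Pick only the tests whose covered modules changed, preserving order."""
    selected = []
    for path in changed_files:
        for test in TEST_COVERAGE.get(path, []):
            if test not in selected:
                selected.append(test)
    return selected

print(select_regression_tests(["cart.py"]))  # → ['test_add_item', 'test_checkout']
```

Production-grade tools derive the coverage map automatically from instrumentation rather than maintaining it by hand.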

4. Scriptless or Codeless Automation

Have you heard of the jargon no-code, codeless, and scriptless? All meaning the same thing, scriptless automation refers to testing done using tools and frameworks that require little to no code.

In this technique, tests are composed within a tool rather than hand-coded, and the tool executes them, comparing actual outcomes against expected outcomes.

5. Cloud-based cross-browser Testing

Well, it is a no-brainer that a cloud-based solution saves infrastructure set-up and maintenance costs. And when it comes to cloud-based cross-browser testing, it is indeed the need of the hour. Today, when there is a plethora of options while picking a device, making sure that your application runs smoothly across the multitude of device, platform, and browser combinations is where the challenge really begins.

Cloud-based cross-browser testing is therefore the ‘go to’ solution that provides the flexibility and scalability to quickly test applications on different platforms and devices.
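
Conceptually, a cross-browser run starts from a test matrix of browser and platform combinations that is then dispatched to a cloud grid. The sketch below (with hypothetical browser and platform values) only builds such a matrix; handing each entry to a remote WebDriver endpoint is left out:

```python
from itertools import product

BROWSERS = ["chrome", "firefox", "safari"]
PLATFORMS = ["Windows 11", "macOS 14", "Android 14"]

def build_matrix(browsers, platforms):
    """Enumerate (browser, platform) combinations, skipping impossible pairs."""
    matrix = []
    for browser, platform in product(browsers, platforms):
        if browser == "safari" and not platform.startswith("macOS"):
            continue  # Safari only runs on Apple platforms in this sketch
        matrix.append({"browserName": browser, "platformName": platform})
    return matrix

print(len(build_matrix(BROWSERS, PLATFORMS)))  # → 7
```

Cloud providers accept capability dictionaries much like these, which is what makes scaling the matrix out a configuration change rather than a code change.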

6. Non-Functional Testing

High-end security and top-level performance are first on everyone’s list. And non-functional testing is what comes to mind!

As the name speaks for itself, non-functional testing is software testing in which you test the non-functional parameters. These parameters can range from reliability to load, performance, and accountability. So, this test essentially evaluates the behavior of the application or system.
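
Performance checks of this kind usually reduce to collecting latency samples and reporting a percentile. The sketch below (hypothetical workload, nearest-rank percentile) illustrates the idea:

```python
import time

def measure_latencies(fn, runs: int = 50):
    """Call `fn` repeatedly and record per-call wall-clock latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return samples

def percentile(samples, pct: float) -> float:
    """Nearest-rank percentile, the usual yardstick for latency SLOs."""
    ordered = sorted(samples)
    rank = max(0, min(len(ordered) - 1, round(pct / 100 * len(ordered)) - 1))
    return ordered[rank]

latencies = measure_latencies(lambda: sum(range(10_000)))
print(f"p95 latency: {percentile(latencies, 95):.3f} ms")
```

A real load test would drive the system over the network with realistic concurrency, but the reporting step looks much the same.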

7. Agile and DevOps

The automation testing market is expected to hit the milestone of USD 30 billion by 2028. Riding the same wave, agile development and testing are making a standout presence these days. Its rapid delivery pace has won agile testing wide acceptance.

Agile automation techniques help organizations stay ahead of the race amid ever-changing business models, through optimization of quality assurance automation tools and approaches.

Unlike the waterfall model, DevOps bridges the gap between Development and Operations, curtailing the software lifecycle. In short, Agile and DevOps assist in delivering software with quality and speed.

8. Blockchain Testing

Blockchain applications are quite different from traditional applications. And so are the testing techniques.

The structure of blockchain involves several components such as blocks, mining, transactions, wallets, and so on, all of which require special tools to test. Therefore, Blockchain testing is the systematic evaluation of the blockchain’s various functional components.

This testing technique is used to test the security, functionality, and performance of the digital data structure.

By testing every entity of the blockchain, this technique confirms every operational and functional aspect of the network, thereby providing a secure and functional infrastructure with improved user experience.
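
One structural check that is specific to blockchains is verifying that every block still commits to its predecessor's hash. The toy chain below (a simplified model, not any real blockchain implementation) shows how tampering with one block breaks verification:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 over a block's contents (simplified model)."""
    payload = json.dumps(
        {k: block[k] for k in ("index", "data", "prev_hash")}, sort_keys=True
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def make_chain(items):
    """Build a toy chain where each block commits to its predecessor's hash."""
    chain, prev = [], "0" * 64
    for i, data in enumerate(items):
        block = {"index": i, "data": data, "prev_hash": prev}
        block["hash"] = block_hash(block)
        prev = block["hash"]
        chain.append(block)
    return chain

def verify_chain(chain) -> bool:
    """A basic structural test: every link and every stored hash must check out."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev or block["hash"] != block_hash(block):
            return False
        prev = block["hash"]
    return True

chain = make_chain(["tx-a", "tx-b", "tx-c"])
assert verify_chain(chain)
chain[1]["data"] = "tampered"      # mutate one block...
assert not verify_chain(chain)     # ...and the chain fails verification
```

Real blockchain test suites add consensus, smart-contract, and wallet tests on top of structural checks like this one.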

9. Mobile Test Automation

This is a good starting point for someone just getting started. Mobile application (app) testing is the process of verifying whether an app designed for mobile devices meets its requirements.

What are the types of tests that you can perform on mobile apps? A few of the tests that you can perform on mobile apps are performance testing, stress testing, functional testing, and accessibility testing.

10. Security Testing

 With the increasing number of cyber threats, security testing has become a critical aspect of software development. Organizations are implementing security testing at all stages of the development process to ensure that their applications are secure from attacks.

Final Words

So, that is a whole lot of automation testing trends to explore. And, surprisingly, the list does not end there. Yes! The list of testing trends keeps expanding as technology keeps maturing.

In conclusion, the testing landscape continues to evolve. The innovative technologies and methodologies prompt organizations to embrace the trends and position themselves to deliver high-quality software efficiently and effectively.

If you want to find out more about the testing trends & uncover insights to strategize your plans and deliver more reliable solutions, you can write to us at digital@infovision.com  

Driving Agile Success Through a Strong Product Mindset

Ever since the inception of the Agile Manifesto in 2001, there has been widespread adoption of the agile methodology in every aspect of software development, especially when it comes to large projects. ‘Individuals and interactions over processes and tools’ is one of the four values of this manifesto. This means that it is the people who drive the development process, and they take precedence over processes and tools. The implementation of this value has resulted in the successful development of many products.

A Digital.ai report states that 94% of organizations are doing at least some work in agile and over half of them said that the majority or all of their teams have adopted agile. In addition, 89% of a survey’s respondents said that high-performing agile teams have people-centric values, clear culture, tools and leadership empowerment.

Interconnected concepts of Agile

The agile product development process is based on three interlinked concepts – people, process and product (PPP). Agile is most commonly associated with processes, occasionally with people, and seldom with products. In today’s era of distributed teams, people work together from various locations to develop and enhance a product or solution. Despite good collaboration, team members sometimes lose sight of the bigger picture during the course of development and remain unaware of how the functionalities they build contribute to the greater value generated. This void is encountered especially while working on big, long-term projects involving disparate teams. How does one cross the chasm, then?

Orienting oneself to a ‘product mindset’ is the medium that will help you traverse the path mainly because the focus then remains steadfast on the ultimate value that gets delivered.  To put it simply, a product mindset is an ‘outside in’ approach that pivots around delivering the best outcomes to the customers in whatever product we develop – be it a service or creating a solution.  Before we proceed, it’s important to note that in this article, when we refer to the “product development mindset,” we are not specifically discussing the process of developing a software product. Instead, this mindset applies to all aspects of a software development project and is relevant to everyone involved.

Incorporating a product mindset while working in agile teams becomes very advantageous as it conveys the bigger picture to all the parties involved. The results of the development process also improve tremendously if everyone has a clear vision of the project.

Agile product development relies on the principle of delivering frequently and incrementally and receiving feedback early in the product development lifecycle. This approach demands vertical slicing of the product in development. Vertical slicing is a technique used in agile software development that focuses on delivering an end-to-end working subset of the whole system.

Agile frameworks including Scrum, SAFe and LeSS are all built around the concept of vertical slicing.

Product vs Project Mindset

The agile approach to software development necessitates a mindset that prioritizes the end goals. Although agile methodologies have been in use since the early 2000s, teams tend to have a stronger inclination towards a project-oriented way of thinking. Shifting from a project to a product mindset is a gradual transition: as agile methods have become more widely accepted, the level of product mindset has also increased across different teams in an organization.

A simple piece of functionality that is part of a bigger product becomes a task for a team. In an agile development system, these tasks are further classified into features (the broader perspective) and user stories (the detailed level). There is often confusion between the terms ‘project’ and ‘product’, as both are widely used corporate jargon and frequently interchanged. However, they are quite distinct from each other and generate entirely contrasting ways of carrying out work.

What is a Project Mindset?

The term ‘project’ is often used in varying contexts within organizations. Its definition can vary widely, ranging from small development tasks to large-scale technology proposals. In fact, any small or big task that needs to be executed is commonly referred to as a project. In traditional non-agile environments, the process of product development is divided into smaller tasks and functionalities. These smaller tasks are often referred to as ‘projects’ even though they are part of a larger product.

Associating every single task to a project may help in tracking the progress of the development, but it creates a ‘project mindset’ in developers. A project mindset limits an individual’s creativity to time constraints and delivery goals, making them lose track of the overall vision. The level of creativity and curiosity takes a serious hit if every piece of work is done with a project mindset.

Product Mindset

A product mindset on the other hand encompasses a set of ideas and principles that enable visualization of how a specific functional output fits into the overall development of the product. This mindset can be cultivated only with a complete understanding of the larger picture.

The level of product mindset also varies according to the teams. For instance, a team that has limited exposure to only development will have a lower level of product mindset compared to a team with experience in development and testing. Similarly, a team that has experience in development, testing and maintenance processes will have the highest level of product mindset. Adopting a product mindset enables quicker realization of the overall vision, improving collaboration and delivery to a large extent.

Having a product mindset renders multiple advantages, including:

  • Improvement in quality and functionality of the product
  • A shared understanding and collaboration among teams
  • Enablement of an agile approach for delivery
  • Increased agility as the integrated system is built with product overview, reducing bottlenecks
  • Sparking new ideas and encouragement to innovation
  • Simplification of complex systems
  • Increase in customer-centric thinking
  • Increase in output and productivity

Varying Levels of Product Mindset

A team’s level of ‘product mindset’ is heavily influenced by their participation in the product development process. Generally, teams that contribute to the innovation stage or the minimum viable product (MVP) stage exhibit a stronger product mindset. An MVP is the most rudimentary version of a product that meets the minimum requirements while also allowing for future enhancements.

The levels of product mindset of individuals also vary based on their roles. Generally, those in positions such as product owners, product managers, and design experts tend to have a higher level of product mindset. However, the extent to which one adopts either a product or project mindset can also be influenced by their professional journey. Those who are familiar with traditional ways of working, such as the waterfall model, are more likely to have a stronger project mindset than a product mindset.

Although adopting a product mindset generally offers numerous advantages, there are circumstances, roles, or team members for whom it may not be as effective. Examples include:

  • Limited experience and exposure
  • Being accustomed to working on only small pieces of functionality
  • Having a project mindset, with a focus on completing tasks within a set time frame, rather than understanding the end objectives
  • Dependence on digital tools that increase productivity but stifle creativity
  • Lack of a complete system overview
  • Being overwhelmed with workload
  • Different work locations or product usage locations
  • Inadequate skillsets among team members

Developing a product mindset

The adoption of agile product development nurtures a product mindset. In this approach, product owners and managers prioritize faster delivery and quick feedback on epics, features and user stories. All the functions within agile development, when implemented effectively, result in developing a product mindset.

Nonetheless, various agile practices have proven to be especially effective in fostering a product mindset, such as:

  • Planning poker
  • System demo
  • Ceremonies with product owners and product managers
  • Backlog refinement
  • Test-driven development
  • Behaviour-driven development

One of the key enablers of agile product development is ’transparency’. If transparency levels aren’t adequately maintained amongst stakeholders, the ecosystem becomes rigid and impacts agility. By having a high level of product mindset among team members, it is possible to prevent such situations.

Cross-functional teams trained in agile development systems possess necessary skills to boost the level of product mindset and technical agility within the team.

Additionally, promoting a culture of continuous learning through hackathons and innovation events can further enhance the product mindset throughout the organization. Possessing a robust product mindset is essential for the agile product development process and for creating exceptional products.

Conclusion

Overall, a product mindset can enable organizations to be more agile, responsive, outcome-driven and customer-focused, leading to efficient project management and effective deliverables.

Significance of TPRM In enterprise risk maneuver

Enterprises continuously strive to engage in successful business partnerships with third-party companies and vendors to drive growth and expansion and to speed up operations. This rapid-growth strategy, however, exposes the company to a growing risk of being breached by sometimes unreliable third-party entities. The continuous interactions and exchange of information that such arrangements entail open the possibility of data breaches, putting the organization at great risk.

One of the critical measures that enterprises must adopt early on is to apply stringent metrics when considering engagement with a third-party vendor. Organizations must prioritize an effective third-party risk management (TPRM) framework to mitigate undue risks and the excessive costs incurred by untoward incidents. While these relationships are important and critical for businesses to thrive, the associated factors such as cyber risks, loss of reputation, and regulatory mishaps need to be factored in with utmost priority. The answer therefore lies not in limiting engagement with third parties, but in ensuring effective management of third-party risks so that business is conducted with utmost trust and confidence.

As per a study by MarketsandMarkets, the global Third-Party Risk Management (TPRM) market is expected to grow from USD 3.2 billion in 2019 to USD 6.4 billion by 2024, at a CAGR of 15.9% over the forecast period. This is a clear indication that organizations dependent on third-party vendors are seriously considering the use of TPRM for their enterprises.

The purpose of this blog is to provide a foundational understanding of the Third-Party Risk Management (TPRM) realm and to emphasize the significance of selecting the right security expert to provide adequate measures.  Let us begin with the fundamentals to get a comprehensive grasp of the subject in discussion.

What is TPRM?

Third-Party Risk Management, abbreviated as TPRM, is the ongoing process of discovering, assessing, and controlling third-party risks related to an organization’s data, operations, financial information, or any other confidential exchange.

Organizations collaborate with third-party vendors for two major reasons. The first is to support their business operations, and the second is to leverage additional benefits from these third-party vendors. This engagement massively contributes to reducing costs, focusing more on core business functions, and enabling best-in-class service from experts of relevant industries.

On the flip side, this kind of arrangement exposes organizations to data breaches along with cybersecurity and regulatory compliance risks that have the propensity to disrupt business operations and damage reputation. Verifying third parties’ reliability requires due diligence.

Gartner’s prediction concurs with the above sentiment: by 2025, 60% of organizations will use cybersecurity risk as a significant determinant in conducting third-party transactions and business engagements. This data insight is an eye-opener to why one needs a TPRM program in the first place.

Now that the basis of the TPRM program is set, let us understand how a third-party risk management system works. What is the process, and how does a typical workflow function?

Typical TPRM Workflow

To start with, organizations must identify all their vendors and categorize them according to enterprise dependencies for sustainability and critical business operations. Once you have identified the potential third party you want to associate with, follow these steps:

  1. Review contracts with complete due diligence.
  2. Identify the required people, process, and technology controls to be adhered to by the vendor.
  3. Perform a detailed third-party risk assessment (TPRA) to ensure that the underlying risks are mitigated and within acceptable risk levels.
  4. Ensure there is a remediation plan in place to mitigate vendor risks in a timely manner.
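
Steps 3 and 4 often boil down to scoring a vendor against a control checklist and mapping the score to a risk tier. The sketch below uses entirely hypothetical controls, weights, and thresholds:

```python
# Hypothetical weights for a simplified third-party risk assessment (TPRA).
CONTROL_WEIGHTS = {
    "data_encryption": 3,
    "mfa_enforced": 2,
    "incident_response_plan": 2,
    "soc2_attestation": 3,
}

def assess_vendor(controls: dict) -> dict:
    """Score a vendor by unmet controls and map the score to a risk tier."""
    gaps = [name for name, present in controls.items() if not present]
    score = sum(CONTROL_WEIGHTS.get(name, 1) for name in gaps)
    if score == 0:
        tier = "low"
    elif score <= 4:
        tier = "medium"
    else:
        tier = "high"
    return {"risk_score": score, "tier": tier, "remediation_needed": gaps}

result = assess_vendor({
    "data_encryption": True,
    "mfa_enforced": False,
    "incident_response_plan": True,
    "soc2_attestation": False,
})
print(result["tier"])  # → 'high'
```

In practice GRC tooling tracks dozens of weighted controls per regulatory framework and feeds the tiers into dashboards like those described below.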

As an example, here is an illustration of a typical TPRM workflow we at InfoVision follow for our TPRM services:

Devising a Cyber Due Diligence Plan:  The  InfoVision Way

It is a daunting task for enterprises to find the right security experts for their TPRM initiative. Beyond finding the right fit, there are additional challenges, such as:

  • efficient, fit-for-purpose processes and procedures
  • identification and mapping of various regulatory compliance controls with respect to available vendors, and
  • automated TPRM programs

These are some of the major setbacks for security leaders. An effective TPRM program can improve overall visibility and results, make it easier to validate and review third-party artifacts, take measures against potential risks, and improve efficiency.

InfoVision employs a distinctive strategy to strengthen its defense against potential hazards that may arise from third-party companies or vendors. This is achieved by thoroughly screening vendors through TPRA and selecting the appropriate ones. As the process progresses, all issues associated with third-party vendors are documented and evidence is collected. This evidence is then reviewed and analyzed, and a response is provided to the third-party provider, ensuring that they rectify the identified risks within the agreed-upon timeframe. The discrepancies are presented to our client’s leadership in the form of reports and dashboards utilizing automated GRC tools.

Here are the distinguishing factors and added benefits of our TPRM program:

  1. Customized dashboards and various compliance reporting templates help our client CISOs make better-informed decisions.
  2. A dedicated security advocate to assist client vendors in redefining their people, process, and technology controls to improve their compliance and security posture.
  3. Our approach optimizes the use of security experts and promotes left shifting by employing GRC tools for automation purposes.
  4. We create and implement a tailored TPRM program process, procedures, and framework that aligns with the unique needs of our clients.

In this way, InfoVision helps its clients accelerate their Vendor Risk Management programs, thus limiting human error, ensuring timely risk identification, providing scalability with no additional resources, and improving governance and compliance.

How can a TPRM program be a game changer?

According to KPMG, six out of ten respondents (61%) think TPRM is undervalued given its crucial function for the organization. Businesses can avoid greater expenditures and gain new efficiencies in resilient operations, cyber security and fraud detection if they have a strong end-to-end TPRM program rather than just focusing on its individual components.

Additionally, leveraging automation through third-party risk management frameworks and technologies is a viable alternative when looking for ways to advance third-party security.

InfoVision has a dedicated team of specialized Enterprise Cybersecurity & Risk Services (ECRS) practitioners and experts who can craft a ‘fit-for-purpose’ program for your business. We can help your security leadership teams manage the TPRM program and devise your overall security strategy. Our unique approach leverages a blend of technology, industry experience, security domain expertise and fine-tuned processes to help you conduct your business with trust and confidence.

Raise your queries and speak to our specialists by writing to us at info.ecrs@infovision.com.

Blockchain is the next big step in automated border control systems

For decades now, automated border control systems, popularly known as eGates, have been used at border crossing points primarily to speed up the clearance process. A significant benefit of these systems is that they quickly scan and verify the identities of travellers at the border, restricting illegal entry into countries. This proves effective in preventing terrorism and human trafficking to a large extent. The system verifies travellers’ identities through biometric parameters without physical or human intervention, ultimately enabling more effective and efficient management of the people traversing the borders.

According to Mordor Intelligence, “the automated border control market is expected to witness a CAGR of 16.25% over the forecast period (2021-2026). The increasing threat of terrorist attacks and the security standards set by international authorities that include IATA, ICAO and ACI are two of the most influential drivers sustaining the market.”

Machine Readable Formats Make It Possible

Governments have made significant investments and implemented multiple innovative solutions to strengthen border security control. A machine-readable passport (MRP) is a foundational pillar of cross-border security. MRP is a type of machine-readable travel document (MRTD) that has identification data encoded in optical character recognition (OCR) format.

MRTD is an official travel document, standardized across the globe, issued by a state or organization for international travellers. This OCR-based document makes it easier for automated systems to scan the travel document and its interoperability allows authorities to check visitors against a criminal database to restrict entry.

MRTD contains a standardized format of various identification details of the traveller that includes a picture or a digital image along with mandatory and optional identification elements. The mandatory elements apart from the photograph, are reflected in a two or three-line machine readable zone (MRZ). The MRTD standards are defined in the ICAO 9303 document published by the International Civil Aviation Organization (ICAO) and have been implemented by many countries around the world. MRTD and biometric passports have significantly improved the efficiency of the border control management system.

A widely used MRTD is the machine-readable passport (MRP), and each MRP contains various biometric elements to identify its rightful owner. These elements include retina scans, fingerprints and facial recognition. It also has ICAO-specified features, including the MRZ and other text attributes that are visible on the first page of the passport.

The key issue with the current border control management system is its centralization.  The systems are controlled by a single entity. As a result, data is not readily shared among different law enforcement agencies. This makes it rather difficult to track down suspected individuals. Also, currently there are no systems available to immediately track, control, blacklist or revoke a suspected passport.

Blockchain for border security controls

Blockchain technology is proposed as an effective solution to mitigate the existing border control management challenges. A list of blacklisted or flagged travel documents can be stored and maintained in a smart contract (one of the features of Blockchain technology). This list can be updated as and when required. Any incremental change made to this list will immediately be visible to all law enforcement agencies and border control points, thus enabling immediate control over the movement of a suspected traveller.

Arguably, traditional mechanisms like PKIs and P2P networks can also be used for tracking down suspected travellers. However, they would fall short of what Blockchain can provide.

Blockchain can simplify the whole system without complex networks and PKI setups, and therefore result in significant cost reduction. A border control management system backed by Blockchain can provide cryptographically guaranteed immutability that helps in auditing and preventing fraudulent activity. A complete database with all travel documents perhaps cannot be effectively maintained or stored in a blockchain network currently, due to scalability issues. However, a distributed backend database such as BigchainDB, the InterPlanetary File System (IPFS), or Swarm can be a good substitute.

How to Make It Work?

A hash of the travel document that has the biometric ID of an individual can be stored in a simple smart contract and another hash of the same document can be used to refer to detailed information that is available on the distributed file system such as IPFS. This ensures that when a travel document is blacklisted anywhere on the network, that information will be available immediately with the cryptographic guarantee of its authenticity and integrity throughout the distributed ledger. This functionality can effectively support anti-terrorism activities, thus playing a vital role in the homeland security function of a government.

A smart contract will have a defined array for storing individual details, their respective biometric records and other critical identification details. These identifying details can be a hash of the MRZ of the passport or travel document, concatenated with the biometric record derived from the RFID chip. A simple boolean field can be used to identify blacklisted passports. Further detailed biometric verification can be done by traditional systems after the traveller passes all the checks done by the blockchain solution. Eventually, when the decision on the traveller’s entry status is made, it can be propagated back to the blockchain network to immediately inform all the stakeholders on the network.
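To make the described storage concrete, here is a hypothetical Python model of the contract’s state and checks. An actual implementation would live on-chain in a smart-contract language such as Solidity; all names below are illustrative, not part of any real system.

```python
import hashlib

def travel_doc_key(mrz: str, biometric_hash: str) -> str:
    """Identifier as described above: hash of the MRZ concatenated with
    the biometric record hash read from the RFID chip."""
    return hashlib.sha256((mrz + biometric_hash).encode()).hexdigest()

class BlacklistContract:
    """In-memory stand-in for the smart contract's storage."""

    def __init__(self):
        self.blacklisted = {}  # document key -> boolean flag
        self.ipfs_ref = {}     # document key -> hash of detailed off-chain data

    def register(self, key, ipfs_hash):
        self.blacklisted[key] = False
        self.ipfs_ref[key] = ipfs_hash

    def flag(self, key):
        # Once recorded on the ledger, the change is visible to every node
        self.blacklisted[key] = True

    def is_blacklisted(self, key):
        return self.blacklisted.get(key, False)

contract = BlacklistContract()
key = travel_doc_key("P<UTOEXAMPLE<<TRAVELLER<<<<<<", "b1c2d3")
contract.register(key, "QmHypotheticalIpfsHash")
contract.flag(key)
print(contract.is_blacklisted(key))  # True
```

At an eGate, the scanner would compute the same key from the presented document and query the contract; any key not found, or found with the flag set, triggers the traditional manual checks.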

Building blocks of Blockchain-based control system

A high-level approach to building a blockchain-based border control system is illustrated below, for quick understanding:

Here, the passport is initially inspected twice: once with a page scanner and once with an RFID reader. The page scanner reads the data on the page and extracts machine-readable information along with a hash of the biometric data stored in the RFID chip. A live photo and retina scan of the traveller are taken at the time and stored in the database. This information is then passed on to the blockchain network for further checks. A smart contract on the blockchain network verifies the legitimacy of the travel document in two steps: it first scans through its list of blacklisted passports, and then requests more data from the backend IPFS database to complete the check for flagged passport holders. Note that biometric data such as photos or retina scans are not stored directly on the blockchain network. Instead, only a reference to this data in the backend (IPFS or BigchainDB) is stored in the blockchain.

Conclusion

To strengthen the border security control system, the gate is programmed to allow access to the traveller only if the travel document data passes two main checks. Firstly, the data presented in the passport should match that of the IPFS/Bigchain DB files. Secondly, it should pass the smart contract logical check. If the data fails to meet one or both checks, access to the gate will be denied and respective authorities will be alerted. After the verification process, the information is propagated throughout the blockchain network and is instantly available to all stakeholders on border control. These stakeholders can be a worldwide consortium of homeland security departments of various nations.

To know more, talk to our Blockchain expert – Dr. Arvind Deenadayalan, Global Head of Blockchain Practice

10 Important Features to Watch out for in Salesforce Admin Spring ’23 Release

The Salesforce Spring ’23 release is around the corner, and it seems to be packed with plenty of great news. While a few features that were scheduled for a later release appear to have been moved up, there is other amazing stuff visible in the beta version. Overall, the bag of goodies seems loaded and the anticipation is high!

As a tradition, Salesforce makes major releases three times a year. These releases are awaited with a lot of expectation, as they offer new features and technology updates that admins can leverage. I feel that the Spring ’23 release will definitely help drive productivity and boost security. While building a great user experience (UX) seems to have been factored in, I see lots of opportunities for building fantastic apps for both internal and external stakeholders.

This article focuses on the 10 features of the Salesforce Spring ’23 release that I consider most important for Admins.

1. Migrate Process Builder to Flow

At Dreamforce ’21, Salesforce announced the retirement of Workflow Rules and Process Builder and scheduled the release of tools for migrating to Flow. Until now, only the migration of workflow rules has been available.

With the Spring ’23 release, Salesforce is shipping an updated Migrate to Flow tool that can help your transition to Flow Builder. In addition to workflow rules, you can now use the tool to convert Process Builder processes into flows. Flows can do everything that processes can do and more.

From Setup, in the Quick Find box, enter Migrate to Flow, and then select Migrate to Flow. On the Migrate to Flow page, select the process that you want to convert into a flow and click Migrate to Flow. Then select the criteria that you want to migrate. After the migration, test the new flow in Flow Builder; if everything works as expected, activate the flow and deactivate the process you converted.


2. Build Custom Forecast Pages with the Lightning App Builder

Forecasting in Sales Cloud has seen a good number of updates over the last few releases. In Spring ‘23 you can design and build custom forecast pages using the Lightning App Builder.

As a result of the ease of building flexipages that the Lightning App Builder gives, you can build pages using standard and custom components. Your page designs can evolve as fast as your sales processes. You can create and assign different layouts for different users.

3. Collaborate on Complex Deals with Opportunity Product Splits

In complex business transactions or negotiations, there is generally no single person responsible for the closure of the deal, as it involves an entire team. Splitting the opportunity allows one to track credit across multiple team members. Earlier, such splits were possible at the Opportunity level only. With the Spring ’23 release, splits at the Product level are also available.

4. Importing Contacts and Leads with a Guided Experience

With the new Guided Experience, when users choose to import contacts or leads, they are presented with multiple options for importing data, depending on their assigned permissions.


The new wizard provides a simple interface that walks users through the steps of importing a CSV file.

5. Dynamic Forms for Leads AND Cases

Dynamic Forms for leads and cases is an early-release feature from Salesforce. With Dynamic Forms, lead and case record pages can now be configured to make them more robust. Earlier, this capability was available only for account, contact and opportunity record pages.

6. “View All” for Dynamic Related Lists

With the Spring ’23 release, Dynamic Related Lists gain a “View All” link that enables users to see the complete list of related records.

7. Dynamic Actions for Standard Objects

Now Dynamic Actions are available for all standard objects. Earlier it was available only for Account, Case, Contact, Lead and Opportunity.

Dynamic Actions enable you to create intuitive, responsive, and uncluttered pages that display only the actions your users need to see, based on the criteria you specify.

Instead of scanning an endless list of actions, users will be presented with a simple choice, relevant to their roles and profiles, or when a record meets some criteria.

8. Track Field History for Activities

Up to six fields on Task and Event can now be tracked when field history tracking for activities is turned on.


9. Picklist Updates

Picklist fields have received a lot of new features:

  • Clean Up Inactive Picklist Values
  • Bulk Manage Picklist Values
  • Limit the Number of Inactive Picklist Values (Release Update)
  • Capture Inclusive Data with Gender Identity and Pronouns Fields: these two new standard picklist fields are now available as optional fields on Leads, Contacts, and Person Accounts.

10. Reports and Dashboards

Reports and Dashboards have got many exciting updates.

  • Creating Personalized Report Filters
    You can now create dynamic report filters based on the user’s profile, so that users view records specific to them.
  • Subscribe to More Reports and Dashboards
    In Unlimited Edition orgs, users can now subscribe to up to 15 reports and 15 dashboards. Earlier, this was restricted to 7.
  • Stay Informed on Dashboard and Report Subscriptions
    You can now create a custom report type to see which reports, dashboards, or other analytics assets users have subscribed to.
  • Stay Organized by Adding Reports and Dashboards to Collections
    Now you can use collections to organize the reports and dashboards even if they exist in multiple folders. You can also pin important collections to your home page, hide irrelevant collections, and share collections with others.
  • Focus Your View with More Dashboard Filters
    You can refine and target dashboard data with additional filters on Lightning dashboards. With only three filters previously available, separate versions of the same dashboard had to be maintained for different business units and regions; that is no longer necessary. This feature is in beta only.

Conclusion

The Salesforce Spring ’23 release, I feel, will certainly not disappoint administrators, as a lot of ‘most awaited’ features seem to have made it in. A few features seen in the beta came as a pleasant surprise. I would definitely encourage you to read the release notes so that you can identify the features that are important to you.

We, at InfoVision, have a dedicated Salesforce Center of Excellence that focuses on innovation, through which we develop new Salesforce competencies. We leverage numerous tools, processes and accelerators to build industry-specific use cases that adhere to global standards. We therefore follow every release that Salesforce makes with a lot of interest and curiosity. The releases create opportunities for us to innovate and find differentiating ways to solve the unmet needs of our customers.

I am happy to have more in-depth discussions on any aspect of Salesforce with those of you who are interested.

Building customer loyalty in retail

Loyal customers in retail make more repeat purchases, shop more and refer more. Organizations in the US, on average, spend 4 to 6 times more on acquiring new customers than on retaining existing ones. From a business perspective, especially in today’s competitive landscape where customers have multiple options to choose from, customer retention is as important as customer acquisition. This is why customer loyalty needs to be carefully studied and planned, in order to maximize the value of a loyal customer. Besides, many retail industry studies conclude that product quality alone does not suffice for modern buyers. Customer service and personalized recommendations are a definite plus that can swing the pendulum in a given direction.

Loyalty is Valuable Even When Partial

A returning customer who repeatedly prefers to buy from one brand over another is considered a loyal customer. Retail loyalty differs from brand loyalty elsewhere in terms of frequency of purchase, range of products and the fierceness of competition. For these reasons, 100% customer loyalty is quite unlikely in retail. This does not diminish the value of a loyal customer for a retailer. Several factors play a role in influencing the customer to favor a particular brand: convenience while shopping, satisfaction with the range of products, attractive offers and familiarity with the brand, among others.

Customer Loyalty Programs

Building loyalty programs is an effective way to nudge customers to prefer your brand. 50% of US consumers use a loyalty card or app for the purchase of fuel. Around 71% of retailers offer some kind of loyalty program.  For loyalty programs to really work without increasing customer friction, they need to be contextual and timely. Else, they run the risk of backfiring. Advanced technologies and innovative solutions for loyalty programs help retailers with deeper customer insights in real-time and thus make hyper-personalization possible.

According to a Gartner insight, “customer loyalty can be increased by performing value enhancement activities that leave customers feeling like they can use the product better and are more confident in their purchase decision.”

Challenges with the Traditional Approach to Loyalty Programs

While loyalty programs are not a new concept, conventional methods haven’t always made a significant impact on retailers or their customers. Worse still, some of them backfire and are seen as a nuisance by customers. Here are some challenges that modern, technology-backed loyalty programs should get right.

  • Integration with existing systems
    As retailers try to move towards an omnichannel presence and digitally transform every aspect of their operations, integrating loyalty programs with these systems has not been straightforward. Without this integration, it is very difficult to harness the full potential of loyal customers. Retailers therefore need to carefully select their loyalty platform, whether an off-the-shelf solution from a provider or one developed in-house.
  • Analytics
    Insights on customer behavior and their response to promotional offers are needed to understand how well the loyalty program is being received. While retailers may have this data, it is usually scattered and not systematic enough to run analytics. Data insights are also crucial to ascertain if the value created for the retailer is greater than the value delivered to the customer. Any good loyalty program solution should have this capability built in.
  • Impersonal Offers
    Generic loyalty programs are seldom relevant. Personalized rewards are much more meaningful to customers. There are several brands that fail to leverage their customer data effectively to bring the desired personalization in offers and promotions. Offering a promotion on coffee to a customer who usually purchases tea is what needs to be avoided.
  • Transaction-only Focus
    Most traditional approaches have a narrow view of customer loyalty, linking it only to purchases. However, every time customers write a review or refer your brand to others, they are displaying loyalty, and this can be treated as a trigger for rewards.
  • Not Simple Enough
    With everything that goes on, the last thing customers want is hard-to-understand and difficult-to-keep-track-of loyalty programs. Similarly, if the process to redeem points is not simple enough, customers may not bother and might actually be put off, which is quite the opposite of the primary intent of any loyalty program. When technology is being used to make the buying experience as simple as possible, why shouldn’t the same principle apply to loyalty programs?
  • Short-term Redundant Offers
    Some users may like to collect rewards in the form of redeemable points, while some may prefer membership coupons or event passes. Repetitive offers can become redundant and irrelevant. Similarly, short-term offers do not build loyalty in the long term. A good mix of offers that cover a wider range and period is more appreciated by customers.

Digital Wallets and Shared Loyalty Programs

Retailers with mature loyalty programs are now looking to offer shared loyalty programs in collaboration with other brands. This proves to be more cost-efficient for the brands and more beneficial to the users. Also known as coalition loyalty, this is done with the help of digital wallets and extensive partnerships with unrelated brands. For example, retailers may partner with fuel providers to extend the scope of their rewards and further strengthen customer loyalty. Brands with different purchase cycles also stand to gain from each other’s customer loyalty. Digital wallets or mobile wallets have given rise to a new kind of loyalty economy, where consumers can track their reward points from various brands in one location and actually use them at POS counters. With everything now being on mobile, users no longer need to keep another physical loyalty card handy.

Gamification is another new way to build engaging loyalty programs. It helps in engaging the customers, creating a sense of community or accomplishment and generating excitement for the brand.  Such programs need to be highly creative and leverage the latest technologies.

The bottom line of any loyalty program is to generate profits and not to become a cost center. Knowing your customers’ preferences and having that data handy is the only way to create personalized and targeted promotions. Identifying the right channel (POS, POPs, SMS, in-app), the right offer and the right time may look simple but has clearly been a challenge for retailers. Partnering with an experienced loyalty solution provider like InfoVision can help to overcome most of these challenges. The team of technology and retail experts at InfoVision has successfully implemented a combined mobile fuel payment, digital wallet, mobile checkout and customer rewards system for a leading multinational retailer.

Want to talk to our expert?  Please write to us at info@infovision.com

Vector-based Search: An Efficient Technique for Unstructured Duplicate Data Detection

Organizations today are driven by a competitive landscape to make insights-led decisions at speed and scale.  And, data is at the core here.  Capturing, storing and analyzing large volumes of data in a proper way has become a business necessity. Analyst firm IDC predicts that the global creation and replication of data will reach 181 zettabytes in 2025. However, almost 80% of that data will be unstructured and much less will be analyzed and stored.

A single user or organization may collect large amounts of data in multiple formats such as images, documents, audio files, and so on, that consume significantly large storage space. Most storage applications use a predefined folder structure and give a unique file name to all data that is stored. This unique file name system of applications enables the same file to exist under different names. This makes it rather difficult to identify duplicate data without checking its content.

This blog focuses on the challenges associated with data duplication in the database and the detection of the same in unstructured folder directories.

The complications of unstructured data

Unstructured data is defined as data that lacks a predefined data model or that cannot be stored in relational databases. According to a report, 80% to 90% of the world’s data is unstructured, the majority of which has been created in the last couple of years. The unstructured data is growing at a rate of 55%-65% every year. Unstructured data may contain large amounts of duplicate data, limiting enterprises’ ability to analyze their data.

Here are a few issues with unstructured data (duplicate data in particular) and its impact on any system and its efficiency:

  • Increase in storage requirements: The more duplicate data there is, the greater the storage requirements. This substantially increases the operating costs for applications.
  • Large number of data files: This significantly increases the response time for every type of search function.
  • Delays in migration: More time is required to migrate data from one storage facility to another.
  • Difficulty in eliminating duplicates: Removing duplicate files becomes harder as the system scales.

Redundant data creates disarray in the system. For that reason, it becomes imperative for organizations to identify and eliminate duplicate files. A clean database free of duplicate data avoids unnecessary computation requirements and improves efficiency.

Challenges in duplicate record detection

Detecting duplicate files through search functions that use file characteristics like name, size and type may seem to be the easiest method. However, it might not prove to be the most efficient, especially when the data is at a large scale. Here’s why:

  • Searching with file names: Most of the applications use unique file names to store media files. This makes the search difficult because the same file can be under different names. Identification of duplicate data is not possible unless the content is examined.
  • Search based on content: As searching with file names isn’t suitable for applications, a search based on content appears to be the next option. However, if we are dealing with a large document or PDF with multiple pages, this is not a feasible solution either. It will not only have high latency but will also be computationally expensive.
  • Search based on types and formats: Media files can be of different types like images, video, audio and so on. Each type of media file can be stored in multiple formats. For instance, an audio file can be saved as .wav, .mp3, AAC or others. The file structure and encoding for each format will be different, hence making the detection of duplicate files difficult.

The proposed solution

A suitable solution to detect duplicate files must address the complications of dealing with large volumes of data, multiple media formats and low latency. If each file were converted into a multi-dimensional vector and fed as input to a nearest neighbors algorithm, one would get the top 5-10 possible duplicate copies of the file. Once files are converted into vectors, duplicates can be easily identified, as the distance between the vectors of duplicate files will be almost zero.

Here’s how different types of files can be converted to multi-dimensional vectors.

  1. Image files: Images are multi-dimensional arrays that have multiple pixels. Each pixel has three values – red, green and blue. When passed through a pre-trained convolution neural network, the images or a video frame get converted into vectors. A convolution neural network is a deep learning architecture, specifically designed to work with image inputs. Many standard architectures like VGG16, ResNet, MobileNet, AlexNet and others are proven to be very efficient in prediction based on inputs. These architectures are trained on large standard datasets like ImageNet with classification layers at the top.

    Represented below is a very simple sample convolution neural network for reference:
    The required images are fed into multiple convolution layers as inputs. Convolution layers are trained to identify underlying patterns in image inputs. Each convolution layer has its own set of filters that multiply against the pixels of the input image. The pooling layer takes the average of the pixels and reduces the image size as it passes to the next step in the network. The flatten layer collects the input from the pooling layers and gives out the vector form of the image.
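    As a toy illustration of this pipeline, the NumPy sketch below applies one hand-written 2x2 filter, average pooling and flattening to a random 8x8 "image". A real system would instead take the vector produced by a pre-trained network such as VGG16 or MobileNet; the numbers here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))            # toy grayscale "image"
kernel = np.array([[1, 0], [0, -1]])  # one hand-written 2x2 filter

# Convolution layer: slide the filter over the image (valid positions only)
conv = np.zeros((7, 7))
for i in range(7):
    for j in range(7):
        conv[i, j] = np.sum(image[i:i + 2, j:j + 2] * kernel)

# Pooling layer: average over non-overlapping 2x2 windows (crop to an even size)
pooled = conv[:6, :6].reshape(3, 2, 3, 2).mean(axis=(1, 3))

# Flatten layer: the fixed-length vector representing this image
vector = pooled.flatten()
print(vector.shape)  # (9,)
```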
     
  2. Text Files: To convert the text files into vectors, the words that comprise that particular file are used. Words are nothing but a combination of ASCII codes of characters. However, there is no representation available for a complete word. In such cases, pre-trained word vectors such as Word2Vec or Glove vectors can be used. Pre-trained word vectors are obtained after training a deep-learning model such as the skip-gram model on large text data. More details on this skip-gram model are available in the TensorFlow documentation. The output vector dimension will change with respect to the chosen pre-trained word representation model.

    To convert a text document with multiple words, where the number of words is not fixed, the Average Word2Vec representation can be used on the complete document: the Word2Vec vectors of all N words in the document are summed and divided by N, i.e. v(doc) = (v(w1) + v(w2) + … + v(wN)) / N.
    This solution can be made more feasible by adding a 36-dimensional (26 letters + 10 digits) vector as an extension to the final representation of the text file. This becomes efficient in cases where two text files have the same characters but in different sequences.
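    The averaging and the 36-dimensional extension described above can be sketched in NumPy as follows. The 4-dimensional word vectors here are made-up toy values standing in for real pre-trained Word2Vec or GloVe vectors, which typically have 100 to 300 dimensions.

```python
import string
from collections import Counter

import numpy as np

# Toy 4-dimensional lookup standing in for pre-trained Word2Vec/GloVe vectors
word_vectors = {
    "invoice": np.array([0.1, 0.7, 0.3, 0.9]),
    "payment": np.array([0.2, 0.6, 0.4, 0.8]),
    "due":     np.array([0.9, 0.1, 0.5, 0.2]),
}

def avg_word2vec(words):
    """v(doc) = (1/N) * sum of v(w_i) over the known words."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0)

def char_extension(text):
    """36-dim extension: counts of the 26 letters and 10 digits."""
    counts = Counter(text.lower())
    return np.array([counts[c] for c in string.ascii_lowercase + string.digits],
                    dtype=float)

doc = "invoice payment due"
vector = np.concatenate([avg_word2vec(doc.split()), char_extension(doc)])
print(vector.shape)  # (40,) = 4 word-vector dims + 36 character dims
```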
  3. PDF files: PDF files usually contain text, images or a mix of both. Therefore, to make the solution more inclusive, vector conversion for both text and images is programmed in. The approaches discussed earlier to convert text and images into vectors are combined here.

    First, to convert the text into a vector, it needs to be extracted from the PDF file and then passed through a similar pipeline as discussed before. Similarly, to convert images to vectors, each page in a PDF is considered as an image and is passed through a pre-trained convolution neural network as discussed before. A PDF file can have multiple pages and to include this aspect, the average of all page vectors is taken to get the final representation.
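    Combining the two pipelines might look like the sketch below. `embed_text` and `embed_page_image` are simplified placeholders for the Average-Word2Vec and pre-trained-CNN encoders described above, and the two-page "PDF" is made-up data for illustration only.

```python
import numpy as np

def embed_text(text):
    """Placeholder for the text pipeline (here: first 8 character codes)."""
    codes = np.array([ord(c) for c in text[:8]], dtype=float)
    return np.pad(codes, (0, 8 - len(codes)))

def embed_page_image(page):
    """Placeholder for the pre-trained CNN: 2x2 average pooling + flatten."""
    return page.reshape(4, 2, 4, 2).mean(axis=(1, 3)).flatten()

# A hypothetical 2-page PDF: its extracted text plus one 8x8 "image" per page
pages = [np.full((8, 8), 0.2), np.full((8, 8), 0.8)]
text = "Q3 report"

image_vec = np.mean([embed_page_image(p) for p in pages], axis=0)  # average over pages
pdf_vector = np.concatenate([embed_text(text), image_vec])
print(pdf_vector.shape)  # (24,) = 8 text dims + 16 image dims
```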
     
  4. Audio files: Audio files stored in .wav or .mp3 formats are sampled values of audio levels. Audio signals are analogue, and to store them digitally they undergo a process called sampling, in which an analogue-to-digital converter captures the sound wave at regular intervals of time and stores the measured values (samples). The sampling rate may vary across applications. Therefore, while converting audio files to vectors, a fixed resampling rate is applied to standardise the sampling rates.

     Another difficulty in converting audio files into vectors is that their lengths may vary. To solve this, each file can be converted to a fixed-length vector by padding (adding zeros at the start or end) or trimming (cutting the vector to a fixed length), depending on the audio length.
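The pad-or-trim step might look like this (the target length of 4 samples is an arbitrary illustration; real audio would use thousands of samples per second):

```python
import numpy as np

def fix_length(samples, target_len):
    """Trim long signals, or zero-pad short ones at the end, to a fixed length."""
    if len(samples) >= target_len:
        return samples[:target_len]
    return np.pad(samples, (0, target_len - len(samples)))

short = fix_length(np.array([0.1, 0.2]), 4)   # padded with two trailing zeros
long_ = fix_length(np.arange(10.0), 4)        # trimmed to the first 4 samples
```

After this step every audio file, like every image, text file and PDF, maps to a vector of a known, fixed dimensionality.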

Finding duplicates with vector representations

With vector representations for all file types, finding duplicate data becomes a matter of measuring the distance between vectors. As previously stated, comparing every vector against every other one is not efficient and increases latency. A more efficient, lower-latency method is the nearest neighbours algorithm.

This algorithm takes vectors as inputs and computes the Euclidean or cosine distance between all pairs of vectors. The files whose vectors lie closest to each other are the most likely duplicates.

A brute-force Euclidean distance search takes O(n²) time, but the optimised scikit-learn implementation, backed by KD-trees, reduces the computation to roughly O(n(k + log n)). Note: k is the dimension of the input vector.
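A minimal sketch of the KD-tree-backed search with scikit-learn's `NearestNeighbors` (the 2-D toy vectors below are made up; the first two represent near-duplicate files):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy file vectors: files 0 and 1 are near-duplicates, file 2 is distinct.
vectors = np.array([
    [0.10, 0.20],
    [0.11, 0.19],
    [5.00, 9.00],
])

nn = NearestNeighbors(n_neighbors=2, algorithm="kd_tree").fit(vectors)
distances, indices = nn.kneighbors(vectors)

# Column 0 is each file itself (distance 0); column 1 is its closest other file.
nearest = indices[:, 1]
```

Querying the fitted index returns both the neighbour indices and their distances, which is what makes the threshold-based filtering described below possible.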

Please note that images, text, PDFs, and audio files pass through different processing pipelines when converted into vectors, so care must be taken to bring the resulting vectors onto the same scale. Since the nearest neighbours algorithm is distance-based, vectors on different scales can produce incorrect results. For instance, if one vector's values vary from 0 to 1 while another's vary from 100 to 200, the second vector will dominate the distance calculation irrespective of actual similarity.

The nearest neighbours algorithm also tells us how similar the files are (the smaller the distance, the more similar the files). Each file vector has to be scaled into a standard range to give a uniform distance measure, which can be done with a pre-processing technique such as scikit-learn's StandardScaler. After pre-processing, the nearest neighbours algorithm can be applied to find the nearest vector for each file. Since the Euclidean distances are returned along with the nearest-neighbour vectors, a distance threshold can then be applied to filter out the less probable duplicates.
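Putting scaling and thresholding together, the end-to-end filtering step could look like the sketch below. The vectors and the 0.5 distance threshold are illustrative assumptions; a real threshold would be tuned against known duplicate pairs.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NearestNeighbors

vectors = np.array([
    [100.0, 0.1],
    [101.0, 0.1],   # near-duplicate of the first file
    [500.0, 0.9],
])

# Standardise each dimension to zero mean and unit variance so that no
# single dimension dominates the Euclidean distance.
scaled = StandardScaler().fit_transform(vectors)

nn = NearestNeighbors(n_neighbors=2).fit(scaled)
distances, indices = nn.kneighbors(scaled)

threshold = 0.5  # assumed cut-off on scaled distance
candidate_pairs = [
    (i, int(indices[i, 1]))
    for i in range(len(vectors))
    if distances[i, 1] < threshold
]
```

Only the genuinely close pair survives the threshold; the distinct third file is filtered out even though its raw values are large.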

Conclusion

Data duplication in any system degrades performance and inflates infrastructure requirements. Detecting duplicate records from file characteristics alone is not recommended, since accurate results require examining the content itself. Vector-based search is a more efficient technique for duplicate record detection, and a successful implementation of this methodology can identify the most and least probable duplicate files in unstructured data storage systems.

Eight Quick Tips to Choose the Best Public Cloud Provider

Cloud adoption is all-pervasive today – across industries, businesses and geographies.

One of the drivers for this high rate of adoption is the fact that cloud services support digital innovation by providing scalable and cost-effective solutions for software infrastructure, storage, security, connectivity and other specialised services. Cloud migration for enterprises of any size is therefore not a question of ‘if’, but of ‘when’ and more specifically, ‘how’.

Cloud Migration: The Big Question

Whether your organization is thinking of migrating its on-premise setup to the cloud, or you are a consulting company that needs to offer a recommendation to a client, the biggest and most fundamental challenge is identifying the right cloud service provider. The obvious top three names in this space are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). There are others too, but these three are the most mature and hence the most sought-after providers. An important aspect to consider here is that not every provider suits every kind of requirement. While the spectrum of offerings may be wide, the choice has to pivot around the business needs that drive the cloud migration.

There are several instances of companies that migrated their workload to the cloud only to realize much later in the journey that the chosen service provider was not the right one. Some have had to go back to square one and migrate from one cloud to another. Deeper knowledge and understanding of cloud service providers is therefore required to make a wise decision. If that expertise is not available in-house, consulting cloud specialists for their expert opinion and recommendation is certainly a good idea.

The cloud system infrastructure services (IaaS) segment is forecast to grow by 30.5 percent in 2023 compared to 2022. The public cloud services market as a whole is expected to grow by 21.3 percent, with only cloud business process services (BPaaS) experiencing single digit annual growth rates.
Source: Statista

How to Select a Cloud Service Provider?

Now that we have established the need to choose wisely, let us look at the aspects you should consider before selecting a cloud provider. I bring you a checklist of eight.

1. On-premise Workload Environment

Thorough research and comprehensive knowledge of the on-premise workload environment to be migrated are fundamental to making the right decision. The workload environment includes the operating systems, software, network protocols and many other aspects. Below is a reference list of some elements that comprise this environment:

  • Operating systems (Windows or Linux)
  • Open-source or licensed software requirements
  • Public-facing or internal applications
  • Application network port requirements
  • Virtual machine hardening
  • Dependencies on other applications
  • Internal application releases
  • Security concerns
  • Application auto scaling capabilities.

2. The Goal of the Migration

After identifying the workloads to be migrated, the next important step is to answer the question, 'What do we want to achieve from this migration?'. The goal of the migration needs to be clearly defined and communicated within the team and the enterprise as a whole. Everyone in the organization should know the benefits and the reasons for migrating the workload to the cloud. Below are a few reasons and benefits that organizations could consider:

  • Accessibility for their customers
  • On-premises hardware expiry
  • Acquisition of additional hardware
  • Organization expansion and subsequent auto-scaling
  • Cost optimization
  • 24 hours’ availability.

3. A Motivated and Inspired Team

Involving a well-rounded team in the migration plan and taking suggestions from them is often overlooked. Undoubtedly, an inspired and motivated team can be the secret ingredient and a big asset in the long migration journey. Here's how you can inspire and motivate your team:

  • Involve them in the migration discussions right from the start
  • Take suggestions from them
  • Discuss the goals, objectives and challenges
  • Give appreciation to those who deserve it
  • Provide the required training.

4. Costing

It is important to know the cost benefits of each provider before arriving at the final decision.  Compare the costs for the long term in a systematic and objective way. Here are some pointers on how to go about it:

  • Calculate the Total Cost of Ownership (TCO) over 3 years to monitor and predict expenses
  • Understand the cost for reserved, spot, and on-demand instances for services
  • Compare the major expense component
  • Do a region-wise comparison
  • Compare the cost with Rehost, Refactor, Re-Architect, and Re-build strategy approaches
  • Ask for discounts on the resource types you will deploy most and in the regions where your workloads will run
  • Ask to omit or reduce the software and VM license cost for development and testing environments
  • Understand the cost of hiring the resources for different cloud providers.

5. Perform Proofs of Concept

Select an appropriate workload and run a proof of concept (PoC) on each cloud service provider you are considering. It will give you an accurate overview of your environment and help you visualize the actual results. You can also get an idea of whether your solutions will work on the cloud or not. Below are the points you can evaluate after the PoCs:

  • Feasibility of the solution
  • Architectural approach
  • Cloud readiness and ease of use
  • DevOps and Automation
  • Cloud running cost
  • Software availability
  • Performance on each cloud
  • Scalability of each cloud.

6. Cloud Provider’s Resources Availability

Up to this step you may have done everything right: setting the goal, comparing costs, motivating your team, and completing the PoCs. You may even have narrowed your search down to a chosen cloud service provider. However, before starting the migration process you will need to recruit specialist resources, and that is when you may realize that the skillset for this provider is not available in the market, putting your entire plan at risk. This problem is usually faced by service provider companies. Finding the right candidate for your cloud migration project can feel like a treasure hunt; if the specific cloud technology is rare, your company will need to spend more time and money to find suitable candidates. The availability of technical resources in the market for each cloud service provider is therefore a critical factor. Some points to note:

  • AWS is the most mature cloud service provider, so AWS-skilled resources are easily available.
  • Azure is now slowly gaining traction, so resources with an Azure skillset are also available, though not as easily.
  • GCP is still at a nascent stage, and technical skills for the GCP cloud are not easy to find.
  • Oracle, IBM and Alibaba are much further behind in public cloud penetration, so finding relevant resources will be very tough.

It goes without saying that the cost of hiring resources goes up when their availability is low.

7. Community and Cloud Service Provider’s Support

Before selecting a cloud provider, check its service plans, support hours and support methods. Make a note of any add-on facilities, and also check the community support available for each provider. Below are some points to consider along these lines:

  • Cost of support
  • Types of support (Email, Phone, Chat, Video conference, etc.)
  • SLA time
  • Marketplace resource support
  • Service support.

8. Software Availability

If you are using the Rehost migration approach and have specific requirements for software and its version, it is worth checking its availability in the cloud service provider's marketplace. Legacy applications usually face this availability issue in the cloud environment.

Gartner predicts that by 2026, public cloud spending will be more than 45% of all enterprise IT spending. It was below 17% in 2021. The future is definitely on the cloud. For a smooth ride into the future, find a public cloud provider that works best for your organization.

How blockchain can save billions for the media industry

The global entertainment and media industry generates revenue worth $2.1 trillion today. This trillion-dollar industry is exposed to multiple risks associated with content distribution, rights management, and royalty payments to artists. Illegal streaming and downloading of content have resulted in multi-billion-dollar revenue losses, and according to one report, the industry was expected to lose around $51.6 billion to copyright piracy in 2022.

Pirated copies of digital music are often made quite effortlessly, and most attempts to prevent piracy have failed. This in turn affects the royalty payments made to creators for the rights to use or publish their content. Moreover, payments are not always guaranteed and are based on traditional airtime figures. There has been no effective way to control content distribution.

That was, until now.

Regain your control, creators!

With the blockchain revolution, the industry's persistent problems now have fitting solutions. Blockchain technology can be extremely effective in solving problems like copy protection and royalty payments. The technology connects consumers, artists and all other stakeholders in the industry and provides full transparency over the distribution process.

Blockchain provides a network where every piece of digital music is cryptographically encrypted so that only paying customers can access it. The payment mechanism for accessing the content is controlled by a smart contract, eliminating the need for a centralized authority. Payments are made automatically based on the logic embedded in these smart contracts and the permissions prescribed for the number of downloads. All transactions on the blockchain network are recorded and immutable, making the process completely transparent and accessible to all stakeholders. This prevents illegal copying of digital music files altogether, consequently preserving creators' copyrights.

Blockchain – connecting content & creators

The cryptographic features of blockchain enable creators to be tied to their content, deterring plagiarism. For instance, a digital music file on a blockchain network contains the owner's information and a timestamp, both of which are immutable and traceable. The legal owners of the content are cryptographically linked to it, and this ownership cannot be transferred to another user unless the original owner grants permission. Copyright transfers are easily managed and traced with blockchain, as all transactions are recorded and cannot be tampered with. Smart contracts can then control all distribution and payments to the concerned parties.

Blockchain technology provides owners of intellectual property (IP) with tools to better monitor and protect their work. Preventing plagiarism of any previously copyrighted content is just one of the many applications the blockchain technology has to offer. Blockchain for businesses will reduce the enterprises’ dependency on multiple security tools and has the potential to create high levels of trust for any transactions, thus enabling leaders to focus on better marketing strategies. This trust factor combined with the ease of use is driving the demand for blockchain amongst enterprises.

Blockchain for media & entertainment industry

The media industry is on the front lines of the digital revolution. By adopting multiple emerging technologies, the industry is enriching its user experience through data-driven insights that in turn build a strong brand value and engaging social media presence.

The media industry often faces the challenge of controlling ownership and distribution. Web3 applications allow creators to effectively monetize their art. Creators can also set up an NFT marketplace, apply smart contracts to profit from future sales and reward loyal fans who invest early in their success.

Here are some ways by which the media industry can mitigate the existing challenges:

Asset Management Security

The media industry has not been able to control the digital sharing of content effectively. Blockchain applications allow creators to verify identities, limit sharing and retain ownership of digital assets.

New Revenue Streams

Blockchain enables creators to sell exclusive assets as NFTs and retain a portion of the profits from these assets as they are traded further in the future. This accountability also affects streaming, where pay-as-you-go models empower consumers and reward artists directly.

Fan Connections

An artist who wishes to sell shares in their career at a launch party can let fans be part of the journey and reward their loyalty as the career expands. It works like a fan club in which multiple parties are involved, ensuring revenue remains inside the creator's community.

Blockchain in Television

Blockchain can vet digital assets and eliminate fake videos before they make it to the news. Consumers can pick the channels they want and pay only for the content they wish to consume. NFTs can create exclusivity in a streaming world where everything is available all the time.

Blockchain in Film Distribution

Blockchain can address challenges associated with identity, ownership and copyright. For instance, scenarios where an actor wants to be paid depending on the success of the movie, a studio wants to accurately price ads and product placements or when creators want to control access and ownership of work.

Blockchain in Music

The legacy of Napster lives on in the peer-to-peer sharing of music files to this day, and an entire industry had to adapt. But challenges with payments, stream tracking and payment distribution persist. Blockchain applications like smart contracts, NFTs and micropayments can be the apt solution that this industry needs.

Unique features of Blockchain

Blockchain is a promising technology backed by its three core strengths – transparency, immutability and impeccable security. Though known for its popularity in the banking sector, blockchain is a futuristic technology that is set to disrupt all verticals with its distinctive applications. Here are a few examples of how blockchain’s unique features can be applied to the media industry.

  • Immutability: Helps with censorship resistance.
  • Security: Orderly data structures which result in a high degree of auditability and reliability.
  • Transparency: Enables the visibility of ledger information across all users.
  • Resistance:  Prevents the alteration of data, eliminates asymmetric information.
  • Resilience: Blockchains are also distributed, which means there is no single point of failure or single attack vector for hackers or other malicious actors.

Benefits of Blockchain in media and entertainment

The music business still runs primarily on legacy systems and antiquated business models developed at a time when songs were distributed predominantly offline rather than released on the internet. Only a few competitors have managed to keep up with digitization, and they now control the streaming business, squeezing artists' income.

The open and decentralized nature of the public Ethereum platform will allow actors in the entertainment industry to reap the following benefits:

  • Decreased IP infringement
  • Disintermediated content from industry intermediaries
  • Direct monetization of all copyrighted assets through smart contracts and p2p micropayments

Digital piracy, fraudulent copies, infringed studio IP and duplication of digital items cost the US film and TV industry an estimated $71 billion annually. Enterprise Ethereum enables artists and creators to digitize the metadata of their unique content, and to manage and store IP rights on a time-stamped, immutable ledger. Blockchain, with its append-only structure, makes it easier for creators to legally enforce their rights if infringement happens.