Industry

So A Recession Is Coming: Why Business Leaders Should Play Offense

In response to an economic downturn, most businesses run the same playbook: cut costs, scrap projects, freeze hiring, and lay off employees - all strategies to weather the storm.

Companies are already taking these steps out of concern over a looming recession. Perhaps because it over-hired amid unprecedented growth, the tech sector is seeing the largest correction, with over 70,000 layoffs across 527 companies so far in 2022.

We surveyed hundreds of business leaders across industries this week to find out whether the same thing is happening outside of tech. We found that companies are overwhelmingly playing defense.

  • 70% of companies are actively preparing for a recession 
  • 30% are cutting administrative costs 
  • 29% are freezing hiring
  • 24% are cutting projects
  • 20% are downsizing

Is defense the right play? It turns out that history has repeatedly favored bold moves amid economic volatility. 

Here’s why business leaders should include more offense in their game plan.

Let’s look at past recessions. 

In 2010, analysts at Harvard Business Review studied the strategies of 4,700 public companies across three past global recessions. The goal: determine the best strategies for surviving and outperforming competitors during an economic downturn.

Here’s the breakdown: 

  • 17% of the companies studied went bankrupt, were bought out, or went private
  • 80% hadn’t regained their pre-recession performance three years after the recession
  • 9% performed better during a recession than they did before it 

How did the top performers do it? The HBR analysts concluded that the strongest strategies involved bold bets.

Companies that were quick to play defense – i.e., by cutting costs – were far less likely to outperform their competitors post-recession, according to the study. By contrast, companies that invested more than their competitors were more likely to enjoy post-recession success, and the best performers were the ones that struck the right mix of offense and defense.

Strategy 1: Balancing offense and defense to generate operational efficiency 

The HBR analysts argue that the right balance prioritizes operational efficiency: not too much cost cutting, not too heavy an investment.

“Companies that rely solely on cutting the workforce have only an 11% probability of achieving breakaway performance after a downturn,” the study reports. They often fail to relaunch because rehiring costs are too high, morale is difficult to replenish, and recruits are hesitant to join a company that turns to layoffs during tough times. 

On the other hand, the best post-recession performers were the ones that cut costs by rooting out operational inefficiencies. Once the recession was over, those cost savings stayed.

The best performers also placed smart offensive bets. For example, the study highlighted Target, which increased spending on marketing and sales during the 2000 recession while uncovering and eliminating inefficiencies in its supply chain.

The result: Target’s sales grew 40% and profits grew 50% during the recession. 

Strategy 2: Going all in on offense

A more recent HBR study has a somewhat different take. After examining midsize companies during the Great Recession, the report concluded that “playing offense dominates playing defense.”

The analysts found that companies that invested heavily in infrastructure, talent, customer acquisition, and innovation saw comparatively better performance after the recession than companies that didn’t. And it wasn’t close. 

Credit: Harvard Business Review

“Recessions are fertile ground for creative destruction and catapulting new winners,” the analysts write. “It’s an ideal time to deploy new technologies, whose deployment during a boom phase would slow down the firm’s profit engine.”

Samsung is one example of a company that capitalized on a market downturn. When the economy spiraled out of control in 2008, the company made headlines for investing heavily.

It worked. The company’s revenue grew by hundreds of billions of dollars between 2009 and 2018.

Strategy 3: Leverage Invisible as a growth partner 

Navigating uncertainty is our specialty. With next-gen outsourcing powered by automation, we unlock an offensive playbook for companies that want to win both during and after the looming recession.

Here’s what that looks like for you: we create operational efficiency for your organization by operating, refining, and scaling the business processes you should be growing, not cutting, when times get tough.

For example, we helped the largest players in food delivery capitalize on the economic downturn caused by COVID-19 by onboarding hundreds of thousands of restaurants to their platforms - practically overnight. Learn how here.

Once we take on your time-consuming and costly business processes, we continuously refine them, driving down operational costs over time. That’s when you start playing offense: turn those cost savings into an aggressive business strategy that helps your company not only survive the recession but emerge as a post-recession powerhouse.

Work smart, move fast, and focus on what matters most - with Invisible.

Andrew Hull

Overview

| LLM Task | Benchmark Dataset/Corpus | Common Metric | Dataset available at |
| --- | --- | --- | --- |
| Sentiment Analysis | SST-1/SST-2 | Accuracy | https://huggingface.co/datasets/sst2 |
| Natural Language Inference / Recognizing Textual Entailment | Stanford Natural Language Inference Corpus (SNLI) | Accuracy | https://nlp.stanford.edu/projects/snli/ |
| Named Entity Recognition | CoNLL-2003 | F1 Score | https://huggingface.co/datasets/conll2003 |
| Question Answering | SQuAD | F1 Score, Exact Match, ROUGE | https://rajpurkar.github.io/SQuAD-explorer/ |
| Machine Translation | WMT | BLEU, METEOR | https://machinetranslate.org/wmt |
| Text Summarization | CNN/Daily Mail Dataset | ROUGE | https://www.tensorflow.org/datasets/catalog/cnn_dailymail |
| Text Generation | WikiText | BLEU, ROUGE | https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/ |
| Paraphrasing | MRPC | ROUGE, BLEU | https://www.microsoft.com/en-us/download/details.aspx?id=52398 |
| Language Modelling | Penn Tree Bank | Perplexity | https://zenodo.org/record/3910021#.ZB3qdHbP23A |
| Bias Detection | StereoSet | Bias Score, Differential Performance | https://huggingface.co/datasets/stereoset |

Table 1 - Example of some LLM tasks with common benchmark datasets and their respective metrics. Please note that for many of these tasks there are multiple benchmark datasets, some of which have not been mentioned here.
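
Several of the benchmarks in Table 1 are hosted on the Hugging Face Hub, so a quick way to inspect them is the datasets library. The snippet below is only a minimal sketch: it assumes the datasets package is installed and that the sst2 and conll2003 identifiers linked in the table still resolve on the Hub (newer library versions may additionally require trust_remote_code=True for script-based datasets such as CoNLL-2003).

```python
# Minimal sketch: loading two of the Table 1 benchmarks with the
# Hugging Face `datasets` library. Assumes `pip install datasets` and
# that the "sst2" / "conll2003" Hub identifiers from the table still resolve.
from datasets import load_dataset

# SST-2: sentence-level binary sentiment classification.
sst2 = load_dataset("sst2", split="validation")
print(sst2[0])  # a dict with an 'idx', a 'sentence', and a 0/1 'label'

# CoNLL-2003: token-level named entity recognition.
conll = load_dataset("conll2003", split="test")
print(conll[0]["tokens"][:8], conll[0]["ner_tags"][:8])
```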

Metric Selection

| Metric | Usage | Pros | Cons |
| --- | --- | --- | --- |
| Accuracy | Measures the proportion of correct predictions made by the model compared to the total number of predictions. | Simple interpretability. Provides an overall measure of model performance. | Sensitive to dataset imbalances, which can make it uninformative. Does not take into account false positives and false negatives. |
| Precision | Measures the proportion of true positives out of all positive predictions. | Useful when the cost of false positives is high. Measures the accuracy of positive predictions. | Does not take into account false negatives. Depends on other metrics to be informative (cannot be used alone). Sensitive to dataset imbalances. |
| Recall | Measures the proportion of true positives out of all actual positive instances. | Useful when the cost of false negatives is high. | Does not take into account false positives. Depends on other metrics to be informative (cannot be used alone). Sensitive to dataset imbalances. |
| F1 Score | Measures the harmonic mean of precision and recall. | Robust to imbalanced datasets. | Assumes equal importance of precision and recall. May not be suitable for multi-class classification problems with different class distributions. |
| Perplexity | Measures the model's uncertainty in predicting the next token (common in text generation tasks). | Interpretable as it provides a single value for model performance. | May not directly correlate with human judgment. |
| BLEU | Measures the similarity between machine-generated text and reference text. | Correlates well with human judgment. Easily interpretable for measuring translation quality. | Does not directly explain performance on certain tasks (but correlates with human judgment). Lacks sensitivity to word order and semantic meaning. |
| ROUGE | Measures the similarity between machine-generated and human-generated text. | Has multiple variants to capture different aspects of similarity. | May not capture semantic similarity beyond n-grams or LCS. Limited to measuring surface-level overlap. |
| METEOR | Measures the similarity between machine-generated translations and reference translations. | Addresses some limitations of BLEU, such as recall and synonyms. | May have higher computational complexity compared to BLEU or ROUGE. Requires linguistic resources for matching, which may not be available for all languages. |

Table 2 - Common LLM metrics, their usage as a measurement tool, and their pros and cons. Note that for some of these metrics there exist different versions. For example, versions of ROUGE include ROUGE-N, ROUGE-L, and ROUGE-W. For context, ROUGE-N measures the overlap of n-word sequences (n-grams) between the reference text and the model-generated text. ROUGE-L measures overlap based on the longest common subsequence of tokens in the reference and generated text, rewarding matches that appear in the same order without requiring them to be contiguous. ROUGE-W, on the other hand, assigns weights (relative importance) to longer common subsequences of tokens (similar to ROUGE-L but with added weights). A combination of the most relevant variants of a metric, such as ROUGE, is selected for comprehensive evaluation.
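
To make a few of the Table 2 definitions concrete, here is a small, self-contained sketch (an illustration, not any benchmark's official scorer): it computes precision, recall, and F1 from raw true-positive, false-positive, and false-negative counts, and a simple ROUGE-N recall as n-gram overlap, in the spirit of the ROUGE-N description above.

```python
# Illustrative sketch of two metric families from Table 2 (not an official scorer).
from collections import Counter

def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP), Recall = TP/(TP+FN), F1 = their harmonic mean."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

def rouge_n_recall(reference: str, generated: str, n: int = 1) -> float:
    """ROUGE-N recall: overlapping n-grams divided by n-grams in the reference."""
    def ngrams(text: str) -> Counter:
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    ref, gen = ngrams(reference), ngrams(generated)
    if not ref:
        return 0.0
    overlap = sum(min(count, gen[gram]) for gram, count in ref.items())
    return overlap / sum(ref.values())

print(precision_recall_f1(tp=8, fp=2, fn=4))         # (0.8, 0.666..., 0.727...)
print(rouge_n_recall("the cat sat on the mat",
                     "the cat lay on the mat", n=1))  # 5/6 ≈ 0.83
```

In practice, evaluations typically rely on maintained scoring packages for BLEU, ROUGE, and the like rather than hand-rolled functions; the sketch is only meant to show what the numbers in Table 2 are measuring.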

Schedule a call to learn more about how Invisible might help your business grow while navigating uncertainty.

Schedule a Call