Abstract: Invisible conducted this survey of 600 randomly sampled American business leaders on February 1st, 2023. Respondents' organizational roles included C-Suite executives, senior management, and middle management. We ran a similar survey in September 2022, which can be read here.
Q1. Has your concern over an economic downturn increased, decreased, or stayed the same in the last 3 months?
Q2. Is your company preparing for a recession? If so, how?
*Percent (Answers) is calculated by dividing each answer count by the total number of answers collected.
Q3. As a result of economic volatility, has your company’s revenue increased, decreased, or stayed the same?
Q4. Since the start of the new year, has your company adopted more automation technology in response to economic volatility?
Q5. Do you plan on adding more automation technology to your business in the next 3 months?
Q6. Have you introduced any new AI tools to your workflow in the past 3 months?
Q7. What sort of work are you automating or augmenting with AI?
*Percent (Answers) is calculated by dividing each answer count by the total number of answers collected.
Q8. What value are you trying to create by using AI?
*Percent (Answers) is calculated by dividing each answer count by the total number of answers collected.
Q9. Have you used the AI tool ChatGPT for work?
Q10. Is your business actively hiring or is hiring on hold?
Q11. Would your business consider outsourcing and/or automation to do work left by hiring gaps?
| LLM Task | Benchmark Dataset/Corpus | Common Metric | Dataset Available At |
|---|---|---|---|
| Sentiment Analysis | SST-1/SST-2 | Accuracy | https://huggingface.co/datasets/sst2 |
| Natural Language Inference / Recognizing Textual Entailment | Stanford Natural Language Inference Corpus (SNLI) | Accuracy | https://nlp.stanford.edu/projects/snli/ |
| Named Entity Recognition | CoNLL-2003 | F1 Score | https://huggingface.co/datasets/conll2003 |
| Question Answering | SQuAD | F1 Score, Exact Match, ROUGE | https://rajpurkar.github.io/SQuAD-explorer/ |
| Machine Translation | WMT | BLEU, METEOR | https://machinetranslate.org/wmt |
| Text Summarization | CNN/Daily Mail Dataset | ROUGE | https://www.tensorflow.org/datasets/catalog/cnn_dailymail |
| Text Generation | WikiText | BLEU, ROUGE | |
| Paraphrasing | MRPC | ROUGE, BLEU | https://www.microsoft.com/en-us/download/details.aspx?id=52398 |
| Language Modelling | Penn Treebank | Perplexity | https://zenodo.org/record/3910021#.ZB3qdHbP23A |
| Bias Detection | StereoSet | Bias Score, Differential Performance | |
Table 1 - Examples of LLM tasks with common benchmark datasets and their respective metrics. Note that many of these tasks have multiple benchmark datasets, only some of which are mentioned here.
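To make Table 1 concrete, below is a minimal sketch of computing the first row's metric (accuracy on SST-2) with the Hugging Face `datasets` and `transformers` libraries. The dataset identifier `sst2`, the `validation` split, and the `sentence`/`label` columns follow the public Hugging Face dataset card, and the pipeline's default sentiment model is used purely as a stand-in; treat these specifics as assumptions to verify against your library versions.

```python
# A minimal accuracy-benchmarking sketch for SST-2 (Table 1, row 1),
# assuming the Hugging Face `datasets` and `transformers` packages are installed.
from datasets import load_dataset
from transformers import pipeline

dataset = load_dataset("sst2", split="validation")   # columns: idx, sentence, label
classifier = pipeline("sentiment-analysis")          # default English sentiment model

correct = 0
for example in dataset:
    prediction = classifier(example["sentence"])[0]["label"]   # "POSITIVE" / "NEGATIVE"
    predicted_label = 1 if prediction == "POSITIVE" else 0     # map to SST-2's 0/1 labels
    correct += int(predicted_label == example["label"])

accuracy = correct / len(dataset)   # proportion of correct predictions
print(f"Accuracy on SST-2 validation: {accuracy:.3f}")
```

The same loop generalizes to the other rows of Table 1: swap in the relevant dataset identifier and replace the final accuracy computation with the metric listed for that task.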
| Metric | Usage | Pros | Cons |
|---|---|---|---|
| Accuracy | Measures the proportion of correct predictions made by the model out of the total number of predictions. | Simple to interpret. Provides an overall measure of model performance. | Sensitive to dataset imbalances, which can make it uninformative. Does not distinguish between false positives and false negatives. |
| Precision | Measures the proportion of true positives out of all positive predictions. | Useful when the cost of false positives is high. Measures the accuracy of positive predictions. | Does not take into account false negatives. Depends on other metrics to be informative (cannot be used alone). Sensitive to dataset imbalances. |
| Recall | Measures the proportion of true positives out of all actual positive instances. | Useful when the cost of false negatives is high. | Does not take into account false positives. Depends on other metrics to be informative (cannot be used alone). Sensitive to dataset imbalances. |
| F1 Score | Measures the harmonic mean of precision and recall. | More robust to imbalanced datasets than accuracy. | Assumes equal importance of precision and recall. May not be suitable for multi-class classification problems with different class distributions. |
| Perplexity | Measures the model's uncertainty in predicting the next token (common in language modelling and text generation tasks). | Interpretable, as it provides a single value for model performance. | May not directly correlate with human judgment. |
| BLEU | Measures the n-gram similarity between machine-generated text and reference text. | Correlates well with human judgment. Easily interpretable for measuring translation quality. | Does not directly measure performance on downstream tasks. Limited sensitivity to word order and semantic meaning. |
| ROUGE | Measures the similarity between machine-generated text and human-written reference text. | Has multiple variants to capture different aspects of similarity. | May not capture semantic similarity beyond n-grams or the longest common subsequence (LCS). Limited to measuring surface-level overlap. |
| METEOR | Measures the similarity between machine-generated translations and reference translations. | Addresses some limitations of BLEU, such as recall and synonym matching. | Higher computational complexity than BLEU or ROUGE. Requires linguistic resources for matching, which may not be available for all languages. |
Table 2 - Common LLM metrics, their usage as a measurement tool, and their pros and cons. Note that some of these metrics come in several variants. For example, ROUGE variants include ROUGE-N, ROUGE-L, and ROUGE-W. ROUGE-N measures the overlap of n-word sequences (n-grams) between the reference text and the model-generated text. ROUGE-L measures the longest common subsequence of tokens shared by the reference and generated text; the matched tokens must appear in the same order but need not be contiguous. ROUGE-W is similar to ROUGE-L but assigns higher weights (relative importance) to longer runs of consecutive matching tokens. In practice, a combination of the most relevant variants of a metric like ROUGE is selected for comprehensive evaluation, as in the sketch below.
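To ground the caption above, here is a self-contained, simplified sketch of ROUGE-N and ROUGE-L over whitespace-tokenized text. It illustrates the definitions only and is not a substitute for a published ROUGE implementation (which adds stemming, multi-reference handling, and ROUGE-W weighting). Note how precision, recall, and F1 from Table 2 reappear inside ROUGE: the overlap is scored once against the reference length (recall) and once against the candidate length (precision).

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(reference, candidate, n=1):
    """ROUGE-N: clipped n-gram overlap, reported as (recall, precision, F1)."""
    ref, cand = ngrams(reference.split(), n), ngrams(candidate.split(), n)
    overlap = sum((ref & cand).values())            # min-count (clipped) matches
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(cand.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return recall, precision, f1

def lcs_length(a, b):
    """Longest common subsequence length via dynamic programming:
    matched tokens keep their order but need not be contiguous."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(reference, candidate):
    """ROUGE-L: F1 over the longest common subsequence of tokens."""
    ref, cand = reference.split(), candidate.split()
    lcs = lcs_length(ref, cand)
    if lcs == 0:
        return 0.0
    recall, precision = lcs / len(ref), lcs / len(cand)
    return 2 * precision * recall / (precision + recall)

reference = "the cat sat on the mat"
candidate = "the cat lay quietly on the mat"
print(rouge_n(reference, candidate, n=2))   # bigram overlap (ROUGE-2)
print(rouge_l(reference, candidate))        # order-preserving LCS score
```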