Assessing Fluency with Critique
This page describes how you can use Critique to assess the fluency of text.
What is Fluency?
Fluency measures whether a text is free of formatting problems, capitalization errors, and obviously ungrammatical sentences (e.g., fragments, missing components) that would make it difficult to read.
For example, the first sentence below is not fluent, while the second is a fluent rewrite:
- Not fluent: "Me go store buy bread."
- Fluent: "I am going to the store to buy bread."
How to use the Critique API
Critique provides an API that you can use to assess the fluency of text. First, prepare your data as a list of entries, where "source" is the input text and "target" is the system output whose fluency will be assessed:
dataset = [
    {
        "source": "I am going to the store to buy bread.",
        "target": "Me go store buy bread."
    }
]
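If your sources and system outputs are already stored in parallel lists, the same dataset can be assembled programmatically. The variable names below are hypothetical, used only to illustrate the structure:

sources = ["I am going to the store to buy bread."]
outputs = ["Me go store buy bread."]

# Build one entry per (source, output) pair; "target" holds the text to be scored.
dataset = [{"source": s, "target": t} for s, t in zip(sources, outputs)]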
Then choose a suitable metric and config setting:
metric = "uni_eval"
config = {
    "task": "summarization",
    "evaluation_aspect": "fluency",
}
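UniEval is a multi-dimensional evaluator, so the same metric can in principle score other aspects of summarization quality (the UniEval paper also covers coherence, consistency, and relevance). Assuming Critique exposes these through the same evaluation_aspect key, a coherence configuration might look like this:

# Hypothetical variant: score coherence instead of fluency with the same metric.
coherence_config = {
    "task": "summarization",
    "evaluation_aspect": "coherence",
}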
Finally, you can evaluate your dataset using the Critique API:
import os

from inspiredco import critique
client = critique.Critique(api_key=os.environ["INSPIREDCO_API_KEY"])
result = client.evaluate(metric=metric, config=config, dataset=dataset)
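The returned result contains the computed scores. The field names below are assumptions made for illustration (check the API reference for the actual response structure), but you might inspect an overall score and per-example scores roughly like this:

# Assumed response layout: one overall score plus one score per dataset entry.
print(f"Overall fluency: {result['overall']['value']}")
for example in result["examples"]:
    print(f"Example fluency: {example['value']}")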
Various Metrics/Configurations for Fluency Evaluation
So far, Critique supports the following metrics for fluency evaluation: