You can detect and correct poorly structured prompts using an LLM by combining a classification head for prompt quality assessment with a generation head for rewriting.
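A minimal sketch of the correction step, assuming a T5-style seq2seq model served through Hugging Face `transformers`. The `fix prompt:` prefix, the `t5-base` checkpoint, and the generation settings are illustrative assumptions; a base T5 checkpoint would need fine-tuning on (poorly structured prompt, corrected prompt) pairs before it reliably performs this task:

```python
TASK_PREFIX = "fix prompt: "  # custom prefix signalling the instruction-specific task


def build_input(raw_prompt: str) -> str:
    """Wrap a raw prompt in the task prefix the fine-tuned model expects."""
    return TASK_PREFIX + raw_prompt.strip()


def correct_prompt(raw_prompt: str, model_name: str = "t5-base") -> str:
    """Tokenize the prefixed prompt, generate a rewrite, and decode it.

    Hypothetical sketch: model_name and generation settings are assumptions.
    """
    # Lazy import keeps the lightweight helper above usable without the dependency.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    inputs = tokenizer(build_input(raw_prompt), return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

In practice, a quality classifier (the classification head mentioned above) would gate this call so that only prompts scored as poorly structured are rewritten.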

The approach relies on the following key components:
- A pre-trained seq2seq model (T5), fine-tuned or prompted for grammar and structure correction.
- A custom input prefix (`fix prompt:`) that signals the instruction-specific task to the model.
- Tokenization and decoding steps that turn the raw input into a structured, readable prompt.
Hence, combining prompt-quality evaluation with generative correction improves both prompt clarity and downstream model performance.