Prompt Engineering: How to Actually Get Good Results from AI

Two people can use the exact same AI model and have completely different experiences.

One gets vague, generic output that doesn’t quite hit the mark. The other gets precise, useful, sometimes impressive results. The difference is almost always how they prompt.

This isn’t complicated. But most people skip it.


Why It Matters

A large language model doesn’t have a fixed set of responses. Every output is shaped by the input it receives. The model is trying to be whatever your prompt suggests — a technical expert, a creative writer, a careful editor, a patient teacher.

If your prompt is vague, it fills in the blanks with assumptions that probably don’t match what you need. If it’s specific and well-framed, you’ve given it a lot more to work with.

Think of it less like a search query and more like a briefing to a very capable person who needs context before they can help you.


The Principles That Actually Work

1. Be Specific

The most common cause of a bad AI response is an underspecified prompt.

Vague: “Write something about cybersecurity.”

Specific: “Write a 300-word intro for a blog post aimed at small business owners explaining why basic network segmentation matters. Plain language, no jargon, practical focus.”

Audience. Format. Length. Purpose. Tone. Every piece of that shapes a better response.


2. Give It a Role

LLMs respond well to being given an explicit identity. This isn’t a gimmick — it works because it gives the model a frame to operate from.

Try:

  • “You are a senior network security engineer reviewing this configuration…”
  • “Act as an experienced technical writer simplifying this for a non-technical audience…”
  • “You are a sceptical editor reviewing this for logical consistency…”

The role shifts vocabulary, assumed knowledge level, tone, and focus in genuinely useful ways.


3. Provide Context

The model only knows what you tell it. If you want relevant, personalized output, share relevant background.

Bad: “Help me write a proposal.”

Better: “Help me write a project proposal for a network infrastructure upgrade for a 50-person company. Current setup is outdated hardware with no redundancy. Budget is around $40K. The audience is a non-technical CEO who cares about downtime risk and ROI.”

Context isn’t padding. It’s signal.


4. Specify the Format

If you want bullet points, ask for bullet points. If you want a numbered list, a table, three options with pros and cons — say so. Models default to flowing prose unless told otherwise.

Useful instructions:

  • “Respond in bullet points”
  • “Give me three options with pros and cons for each”
  • “Keep it under 200 words”
  • “Use clear headings”
  • “Format this as a step-by-step guide”


5. Show It What You Want

One of the most effective techniques is providing examples rather than descriptions. This is known as few-shot prompting.

If you want something written in a specific style, paste two or three examples and say “write more in this style.” The model picks up on tone, length, structure, and even punctuation in ways that a written description often misses.

I use this constantly for technical documentation where consistency matters.
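The “paste examples, then ask for more in that style” pattern can be sketched as a small prompt builder. This is a minimal illustration in Python; the function name and sample texts are my own, not part of any library:

```python
# Assemble a few-shot prompt: style examples first, then the actual request.
# build_style_prompt and the example strings are illustrative, not a real API.

def build_style_prompt(examples, instruction):
    """Concatenate numbered style examples, then the task itself."""
    parts = ["Write more in the style of these examples.\n"]
    for i, text in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{text}\n")
    parts.append(f"Now: {instruction}")
    return "\n".join(parts)

examples = [
    "Short. Direct. No filler.",
    "One idea per sentence. Practical focus.",
]
prompt = build_style_prompt(examples, "Describe VLAN trunking for a beginner.")
print(prompt)
```

The point of the structure is that the model sees the pattern before it sees the task, so the examples do the stylistic work a written description would struggle to.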


6. Ask It to Think Step by Step

For anything involving analysis, problem-solving, or decisions — explicitly ask the model to reason through it before answering. This is called chain-of-thought prompting and it genuinely works.

Add:

  • “Think through this step by step before answering”
  • “Walk me through your reasoning”
  • “Consider the key factors before giving your conclusion”

Particularly effective for anything involving logic, trade-offs, or multi-step problems.
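If you prompt programmatically, the instruction can be appended automatically. A trivial sketch, with my own illustrative wording (any of the phrases above works):

```python
# Wrap any analytical question with an explicit reasoning instruction.
# The function name and phrasing are illustrative, not from any library.

def with_reasoning(question):
    """Append a chain-of-thought instruction to a question."""
    return f"{question}\n\nThink through this step by step before answering."

print(with_reasoning("Should guest Wi-Fi sit on its own VLAN?"))
```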


7. Treat It as a Conversation

A single prompt rarely produces a final result. The most effective workflow is iterative.

If the first response isn’t right, say so specifically:

  • “Good but too formal — make it more conversational”
  • “The second section is too long, tighten it up”
  • “I like the structure but the examples aren’t relevant — replace them with networking examples”

Think of each exchange as a draft, not a deliverable.


Mistakes to Avoid

Asking multiple questions at once. Ask them separately. Bundled questions get shallow answers across all of them.

Being vague out of politeness. Precision is what gets results. “Give me exactly five examples” beats “Could you maybe give me some examples?”

Trusting it blindly. LLMs can be confidently wrong. Push back. Check important facts. Verify sources independently.


A Reusable Template

Here’s a structure that works across most tasks:

[ROLE] You are a [describe the role or expertise].

[CONTEXT] [Relevant background about your situation or goal.]

[TASK] [Clear description of what you want.]

[FORMAT] [Specify format, length, tone, or structure.]

[CONSTRAINTS] [Things to avoid, style requirements, etc.]

You won’t need every section every time. But having the structure in mind makes it easy to spot when you’re leaving something important out.
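For anyone prompting from code, the template translates directly into a small builder that skips whatever sections you leave out. A minimal sketch; all names and labels here are my own, not any library’s API:

```python
# Turn the ROLE / CONTEXT / TASK / FORMAT / CONSTRAINTS template into a
# reusable prompt builder. Empty sections are simply omitted.

def build_prompt(role="", context="", task="", fmt="", constraints=""):
    """Join the non-empty template sections into one prompt string."""
    lines = []
    if role:
        lines.append(f"You are {role}.")
    for label, value in [
        ("Context", context),
        ("Task", task),
        ("Format", fmt),
        ("Constraints", constraints),
    ]:
        if value:
            lines.append(f"{label}: {value}")
    return "\n\n".join(lines)

prompt = build_prompt(
    role="a senior network security engineer",
    task="Review this firewall rule set for risky defaults.",
    fmt="Bullet points, under 200 words.",
)
print(prompt)
```

Keeping the sections optional mirrors the advice above: the structure is a checklist, not a form you must fill in completely.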


The Bottom Line

Prompt engineering is a skill that improves with use. The more you work with these tools, the more intuitive it becomes to frame a request in a way that produces useful output.

The underlying principle is simple: treat the AI like a capable collaborator who needs a clear brief. Give it context, a role, a format, and iterate from there.

The people who get the most out of AI tools aren’t always the most technical. They’re the ones who communicate clearly.

Author: Jon-Paul Walton