
The Best Way to Prompt AI: Building PromptSmith

TL;DR: I built a lightweight, local tool that turns brief ideas into structured, model-ready prompts—often in XML—showing the best way to prompt AI for clearer, more consistent outputs.

Context: Why This Specific Sub-Project

  • Audience: People dabbling in AI who want better outputs without getting overly technical.

  • Pain point: Quick notes don’t translate into well-structured prompts, so models misinterpret or under-deliver.

  • Success definition: A repeatable way to expand short ideas into high-quality prompts, plus mini-tools for images, programming prompts, and thumbnails.

What I Built

Goal: Turn my scrappy, brief prompts into clear, structured instructions that models follow reliably.

Scope (sub-project only): Prompt expansion and optimisation; XML formatting where helpful; mini-tools for image prompt templates, a program-builder that considers current libraries, and a thumbnail helper.

Stack & Tools

  • GPT-5 API for prompt expansion and refinement.

  • XML formatting to add structure when models benefit from schema-like inputs.

  • Gemini for multimodal bits, e.g., analysing references for thumbnails.

  • Local app in Microsoft Edge (runs as an app-like tab) for speed, privacy, and focus.
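To make the XML angle concrete, here is a minimal sketch of wrapping a brief idea in a schema-like prompt. The tag names and fields are illustrative only, not PromptSmith's actual format:

```python
# Sketch: turn a brief idea into a schema-like XML prompt.
# Tag names (<task>, <constraints>, <output_format>) are illustrative.
import xml.etree.ElementTree as ET

def to_xml_prompt(task: str, constraints: list[str], output_format: str) -> str:
    root = ET.Element("prompt")
    ET.SubElement(root, "task").text = task
    cons = ET.SubElement(root, "constraints")
    for c in constraints:
        ET.SubElement(cons, "item").text = c
    ET.SubElement(root, "output_format").text = output_format
    return ET.tostring(root, encoding="unicode")

print(to_xml_prompt(
    "Write a YouTube thumbnail brief",
    ["High contrast", "Max 4 words of on-image text"],
    "Bullet list",
))
```

Even this much structure gives the model unambiguous sections to attend to, which is the whole point of the tool.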

Timeline

| Date | Milestone | Owner | Notes |
| --- | --- | --- | --- |
| 2025-08-02 | Kicked off initial build | Jacques | Set up base prompt improver |
| Ongoing | Iteration & modular add-ons | Jacques | Program builder & thumbnails |

Process (Step-by-Step)

  1. Map the pain. I listed my common “too-brief” prompts and defined a target structure (sections, parameters, constraints). Trade-off: keep it simple vs. add knobs—kept it simple first.

  2. Integrate GPT-5 early. Docs were thin, so I shipped a minimal path that did expansion and basic constraint checks, then layered features.

  3. Tame XML bleed-through. Early runs saw the model echo internal scaffolding. I separated stages and enforced stricter output boundaries to stop self-prompting.

  4. Add multimodal helper. Gemini analyses rough visual refs and proposes cleaner thumbnail directions.

  5. Package locally. Wrapped it as an Edge app-like tab for quick access and privacy.
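The staged flow in steps 2–3 can be sketched roughly like this; `call_model` is a stand-in for a real GPT-5 API call, and the `<result>` delimiters are a hypothetical boundary convention, not the app's actual code:

```python
# Sketch: two separate stages (expand, then format) with a strict
# output boundary so internal scaffolding never leaks into results.
import re

def call_model(instruction: str, text: str) -> str:
    # Placeholder: in the real tool this would call the GPT-5 API.
    return f"<result>{text} (per: {instruction})</result>"

def extract_result(raw: str) -> str:
    # Enforce the boundary: keep only what sits inside <result>...</result>.
    match = re.search(r"<result>(.*?)</result>", raw, re.DOTALL)
    if match is None:
        raise ValueError("model output missing <result> boundary")
    return match.group(1).strip()

def improve_prompt(brief: str) -> str:
    # Stage 1: expand the brief idea into a fuller prompt.
    expanded = extract_result(call_model("Expand into a detailed prompt", brief))
    # Stage 2: convert to structured XML in a separate call, so
    # formatting instructions never mix with content generation.
    return extract_result(call_model("Convert to XML sections", expanded))
```

Keeping the stages in separate calls, and discarding everything outside the declared boundary, is what stopped the model from echoing scaffolding back into its answers.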

Pull-quote: “When a thought’s half-baked, PromptSmith plates it up so the model actually gets it.”

Results

  • Clarity: Brief notes become structured prompts the model can follow.

  • Consistency: Structured outputs (often XML) reduce odd model behaviour.

  • Speed: Faster from idea → usable prompt, especially for programming and thumbnail briefs.

Obstacles & How I Solved Them

  • Issue: Early GPT-5 documentation gaps. Fix: Ship a minimal slice and iterate in short loops. Why it worked: Smaller surface area for errors and faster feedback.

  • Issue: XML conversion causing self-referential outputs. Fix: Stage separation and stricter output boundaries. Why it worked: Prevented the model from internalising scaffolding.

  • Issue: Feature creep vs. time. Fix: Modularise—keep the core improver simple; add tools as optional blocks. Why it worked: Maintains momentum without bloat.

Lessons for AI Dabblers

  • Structure wins. Even light sections and parameters beat a rambling prompt.

  • Separate concerns. Generate content first; format/convert second to avoid bleed-through.

  • Keep it close. A quick local wrapper means you actually use the tool.

Why It Mattered to Me

I think fast and type faster. PromptSmith lets me keep that pace while giving models the structure they need. It’s my safety net for “did I specify enough?” across text, code ideas, and visual briefs.

FAQ

Q: Why use XML for prompts? A: It adds explicit structure—sections and attributes reduce ambiguity and guide model attention.

Q: Can multiple models play nicely in one flow? A: Yes. Use each where it shines, and keep clear boundaries between stages.
