Context > Prompt: How We Actually Get Results from AI

By Nuutti Räisänen, Co-founder & CRO @ One Second AI
Last updated: Oct 2, 2025
The approach we use internally and teach to clients, because "magic prompts" don't exist.
Stop prompting like it's 2022.
Nobody taught you how to talk to AI. So most people guess. They hunt for "the right prompt", some magical series of words that somehow makes everything work.
Spoiler: it doesn't work that way.
After building AI revenue infrastructure for dozens of mid-market businesses, we've learned that the difference between AI that impresses and AI that performs comes down to one thing: context.
Not prompts. Context.
By the end of this article, you'll understand:
The science behind context engineering
The 5-step framework we use for every AI interaction
7 ready-to-use templates you can copy today
Why Context Beats Prompts
Most people treat AI like a vending machine. Insert the right code, get the right output. But that's not how language models work.
Instead of trying to get every instruction perfect on the first try, structure your goal so the model can infer the rest. Research on goal-oriented formulation backs this up: prompts that state the goal and let the model reason toward it, the way people naturally plan, significantly improve performance.
The principle is simple: Say what you want, then have a conversation with AI.
Here's the 5-step framework we use:
Step 1: Define the Goal, Not the Process
The core principle: Articulate the end state you want to reach, not the steps to get there.
The model understands the goal and reasons backward to determine the steps, just like a natural cognitive process.
Instead of this:
"First, analyse the data. Then, identify patterns. Finally, write conclusions."
Do this:
"I need output where [final state is clearly described]. The audience is [who they are]. Success looks like [what would satisfy you]."
Example you can use immediately:
"I need a framework document that helps non-technical stakeholders understand our AI implementation roadmap without overwhelming them with technical details."
Notice what's embedded in that single sentence:
Output type: framework document
Purpose: help stakeholders understand
Constraint: non-technical stakeholders
Anti-goal: no overwhelming technical details
One sentence. Four pieces of critical context. No step-by-step instructions.
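If you build prompts in code rather than typing them into a chat window, the same structure translates directly. Here's a minimal Python sketch, purely illustrative: the function and field names are ours for this example, not part of any library or of our internal tooling.

```python
# Illustrative sketch: compose a goal-first prompt that mirrors the template
# above (final state, audience, what success looks like). No step-by-step
# instructions are included; the model reasons about the "how" itself.
def goal_prompt(final_state: str, audience: str, success: str) -> str:
    return (
        f"I need output where {final_state}. "
        f"The audience is {audience}. "
        f"Success looks like {success}."
    )

print(goal_prompt(
    final_state="non-technical stakeholders understand our AI implementation roadmap",
    audience="stakeholders with no engineering background",
    success="they get the roadmap without being overwhelmed by technical detail",
))
```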
Step 2: Specify Constraints, Not Rules
The key distinction: Constraints describe what the output is not or what boundaries it operates within. Rules describe procedural steps.
Research shows that goals require explicit constraints (conditions, ordering, blocking), but for modern LLMs these work best when expressed as performance boundaries rather than execution rules.
Three constraint types to communicate:
Domain constraints: the field or context
"Medical context" vs. "General knowledge"
"For a 10-year-old" vs. "For domain experts"
Quality constraints: what matters most in the output
"Prioritise brevity" vs. "Comprehensiveness"
"Formal tone" vs. "Conversational"
"Practical implementation focus" vs. "Theoretical completeness"
Scope boundaries: what should NOT be included
"Avoid citations" vs. "Include academic sources"
"Don't mention limitations" vs. "Highlight risks"
"Skip introductions" vs. "Start with context"
How to articulate constraints naturally:
Rather than listing them as rules, embed them in your goal statement.
Weak:
"Follow these rules: 1) Keep it brief, 2) Use simple language, 3) No jargon."
Strong:
"The output should work for someone encountering this topic for the first time, so focus on clarity and practical examples rather than technical terminology."
Same constraints. Completely different signal to the model.
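If you assemble prompts programmatically, the same move looks like this. The sketch below is illustrative only: it weaves the three constraint types from this section (domain, quality, scope) into the goal statement instead of listing them as numbered rules.

```python
# Illustrative sketch: embed domain, quality, and scope constraints
# directly in the goal statement rather than as a rule list.
def constrained_goal(goal: str, domain: str, quality: str, scope: str) -> str:
    return (
        f"{goal} "
        f"This is for {domain}. "
        f"Prioritise {quality}. "
        f"{scope}"
    )

print(constrained_goal(
    goal="I need a one-page explainer of our new pricing model.",
    domain="a reader encountering the topic for the first time",
    quality="clarity and practical examples over technical completeness",
    scope="Skip the introduction and avoid citations.",
))
```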
Step 3: Give Examples
Models learn intent more effectively from what you show them than from what you tell them.
This is one of the highest-leverage context moves available to you:
If you need a specific format, paste or describe one good example
If you need a particular tone, quote a sentence that demonstrates it
If you need a specific depth, show what "too shallow" vs. "good depth" looks like
Structure:
"Here's an example of the level of detail I'm looking for: [example]. Here's what good looks like: [example]."
In our Symphony implementations, we pre-load examples into every agent workflow. The agent has seen what good output looks like before it ever starts generating.
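If you call a model from code, pre-loading a worked example is standard few-shot prompting. The sketch below assumes the OpenAI Python SDK and an API key in your environment; the model name and the example content are placeholders, so swap in whatever client and examples you actually use.

```python
# Few-shot sketch: show the model one good example before the real request.
# Assumes the `openai` package and OPENAI_API_KEY are set up; the model name
# and example texts are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "You write one-paragraph product summaries for non-technical buyers."},
    # A worked example the model can imitate (what "good" looks like).
    {"role": "user",
     "content": "Summarise: an invoicing tool that auto-chases late payments."},
    {"role": "assistant",
     "content": "Gets you paid faster: it sends polite, automatic reminders so you never have to chase an invoice yourself."},
    # The real request, in the same format as the example.
    {"role": "user",
     "content": "Summarise: a CRM plugin that drafts follow-up emails after every call."},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```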
Step 4: Embed Performance Criteria
Instead of "maximise X," specify "I need X at this specific level because..."
Weak: "Make it accurate" Strong: "The facts should be verifiable, something a reader could fact-check in 2 minutes online"
Weak: "Be concise" Strong: "Each section should fit in a single paragraph so this works as a quick reference"
Weak: "Explain it well" Strong: "Someone should be able to understand this without searching for definitions of technical terms"
The difference is specificity at the outcome level. You're describing what success looks like in observable terms, not abstract qualities.
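In code, this amounts to a list of checkable criteria appended to the goal. The sketch below is illustrative (the topic is a made-up placeholder); the criteria are the ones from the examples above.

```python
# Illustrative sketch: spell out success as observable, checkable criteria
# rather than abstract qualities like "accurate" or "concise".
criteria = [
    "every factual claim can be verified online in under 2 minutes",
    "each section fits in a single paragraph, so it works as a quick reference",
    "no sentence requires the reader to look up a technical term",
]

prompt = (
    "I need a quick-reference guide to our onboarding process. "
    "It's good enough when:\n"
    + "\n".join(f"- {c}" for c in criteria)
)
print(prompt)
```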
Step 5: Hierarchical Goal Structure (For Complex Tasks)
Research on hierarchical decomposition shows that complex goals break down into sub-goals, but the model handles this internally if you structure your goal statement correctly.
Here's the structure for hierarchical goals:
State the primary goal (what you ultimately need)
Name the intermediate stages (what should happen in sequence, without prescribing how)
Specify constraints between stages (what must happen before what)
Example:
"I need an analysis where:
First, you assess the current state [constraint: based only on the data provided]
Then, you identify patterns [constraint: patterns should have at least 3 supporting examples each]
Finally, you recommend actions [constraint: only actions that address the identified patterns]"
Notice: You're stating what each stage accomplishes, not how to accomplish it. The model reasons about the how.
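The hierarchical structure is easy to generate if you keep the stages as data, for example as (stage, constraint) pairs. This is an illustrative sketch, not a prescribed format.

```python
# Illustrative sketch: each stage states what it accomplishes plus its
# constraint; the model decides how to execute each one.
stages = [
    ("assess the current state", "based only on the data provided"),
    ("identify patterns", "each pattern needs at least 3 supporting examples"),
    ("recommend actions", "only actions that address the identified patterns"),
]

prompt = "I need an analysis where:\n" + "\n".join(
    f"{i}. You {step} [constraint: {constraint}]"
    for i, (step, constraint) in enumerate(stages, start=1)
)
print(prompt)
```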
The Template We Use Internally
Here's the structure we apply to every significant AI interaction: the output you need, who it's for, what success means, what to focus on, what to avoid, and what the reader should be able to do afterwards.
Real example we've used:
"I need a strategy document that helps my team decide whether to adopt an AI tool for our workflow.
This is for non-technical product managers who need to understand trade-offs quickly.
Success means they can explain the three key decisions to leadership, not that they understand the underlying architecture.
Focus on practical implications and concrete use cases. Avoid deep technical explanations or comparison with competitors.
After reading, they should be able to articulate why this tool matters for our specific situation."
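If you reuse this structure often, it's worth turning into a fill-in-the-blanks helper. The sketch below infers the shape from the example above; it is not a verbatim copy of our internal template, and the names are illustrative.

```python
# Sketch inferred from the example above; not a verbatim internal template.
def context_brief(output: str, audience: str, success: str,
                  focus: str, avoid: str, outcome: str) -> str:
    return (
        f"I need {output}.\n"
        f"This is for {audience}.\n"
        f"Success means {success}.\n"
        f"Focus on {focus}. Avoid {avoid}.\n"
        f"After reading, they should be able to {outcome}."
    )

print(context_brief(
    output="a strategy document that helps my team decide whether to adopt an AI tool",
    audience="non-technical product managers who need to understand trade-offs quickly",
    success="they can explain the three key decisions to leadership",
    focus="practical implications and concrete use cases",
    avoid="deep technical explanations or competitor comparisons",
    outcome="articulate why this tool matters for our specific situation",
))
```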
The Principles Behind the Framework
Goal-oriented framing beats instruction-oriented. Tell the model what you're trying to achieve, not how to achieve it.
Constraints > Rules: Specify boundaries and priorities, let the model choose execution.
Examples > Prescriptions: Show what you want; don't describe it.
Specificity at the outcome level, flexibility at the process level: Know what success looks like, be flexible about how.
Internal reasoning is powerful: Modern models handle complexity internally; your job is to communicate intent, not decompose the task.
7 Templates You Can Use Today
Here are seven context-engineered prompts we use regularly. Replace the [brackets] with your own context.
1. Strategic Decision Memo
Use when deciding whether to adopt an AI tool or approach.
2. Cold Email That Actually Sells
Use when you need sales copy that converts, not spell-checking.
3. LinkedIn Post With 1 Story, 1 Lesson
Use instead of "write a viral LinkedIn post about AI."
4. 30-Day Learning Plan
Use when you want to go from "no idea" to "competent enough to use it."
5. Product Spec Engineers Can Actually Use
Use to turn a fuzzy idea into something design and engineering can act on.
6. Customer Interview Synthesis
Use when you have a pile of notes and need sharp insight.
7. Personal Operating System With AI
Use when you want AI to help you run your week, not just answer questions.
The Bottom Line
You're not looking for magic words. You're telling the model what success looks like, for whom, and within which boundaries.
Context engineering is the skill that separates people who "use AI" from people who get results from AI. It's the same skill we embed into every autonomous agent we build for clients, because agents without proper context are just expensive random generators.
Pick one of these templates. Replace the brackets with your real context. Paste it and iterate.
The model will meet you where you are. Your job is to show it where you're going.
One Second AI builds AI revenue infrastructure for mid-market businesses. Our Symphony transformation replaces manual sales and marketing operations with autonomous AI agents, all built on context engineering principles that actually work.