Not sure where to start?

Book a free AI Assessment. We'll audit your operations, identify the highest-ROI opportunities, and recommend the right path for you.

© 2025, One Second AI


AI Learnings

Context > Prompt: How We Actually Get Results from AI

by Nuutti Räisänen, Co-founder & CRO @ One Second AI

Last updated: Oct 2, 2025

The approach we use internally and teach to clients, because "magic prompts" don't exist.

Stop prompting like it's 2022.

Nobody taught you how to talk to AI. So most people guess. They hunt for "the right prompt", some magical series of words that somehow makes everything work.

Spoiler: it doesn't work that way.

After building AI revenue infrastructure for dozens of mid-market businesses, we've learned that the difference between AI that impresses and AI that performs comes down to one thing: context.

Not prompts. Context.

By the end of this article, you'll understand:

  • The science behind context engineering

  • The 5-step framework we use for every AI interaction

  • 7 ready-to-use templates you can copy today

Why Context Beats Prompts

Most people treat AI like a vending machine. Insert the right code, get the right output. But that's not how language models work.

Instead of trying to get everything perfect on the first try, structure your goal so the model can infer the rest. The research backs this up: goal-oriented formulation, which guides LLMs to follow established patterns of human reasoning, significantly improves performance.

The principle is simple: Say what you want, then have a conversation with AI.

Here's the 5-step framework we use:

Step 1: Define the Goal, Not the Process

The core principle: Articulate the end state you want to reach, not the steps to get there.

The model understands the goal and reasons backward to determine steps—just like a natural cognitive process.

Instead of this:

"First, analyse the data. Then, identify patterns. Finally, write conclusions."

Do this:

"I need output where [final state is clearly described]. The audience is [who they are]. Success looks like [what would satisfy you]."

Example you can use immediately:

"I need a framework document that helps non-technical stakeholders understand our AI implementation roadmap without overwhelming them with technical details."

Notice what's embedded in that single sentence:

  • Output type: framework document

  • Purpose: help stakeholders understand

  • Constraint: non-technical stakeholders

  • Anti-goal: no overwhelming technical details

One sentence. Four pieces of critical context. No step-by-step instructions.

Step 2: Specify Constraints, Not Rules

The key distinction: Constraints describe what the output is not or what boundaries it operates within. Rules describe procedural steps.

Research shows goals require explicit constraints like conditions, ordering, and blocking, but for modern LLMs, these work best when expressed as performance boundaries rather than execution rules.

Three constraint types to communicate:

Domain constraints: the field or context

  • "Medical context" vs. "General knowledge"

  • "For a 10-year-old" vs. "For domain experts"

Quality constraints: what matters most in the output

  • "Prioritise brevity" vs. "Comprehensiveness"

  • "Formal tone" vs. "Conversational"

  • "Practical implementation focus" vs. "Theoretical completeness"

Scope boundaries: what should NOT be included

  • "Avoid citations" vs. "Include academic sources"

  • "Don't mention limitations" vs. "Highlight risks"

  • "Skip introductions" vs. "Start with context"

How to articulate constraints naturally:

Rather than listing them as rules, embed them in your goal statement.

Weak:

"Follow these rules: 1) Keep it brief, 2) Use simple language, 3) No jargon."

Strong:

"The output should work for someone encountering this topic for the first time, so focus on clarity and practical examples rather than technical terminology."

Same constraints. Completely different signal to the model.

Step 3: Give Examples

Models learn intent more effectively from what you show them than what you tell them.

This is one of the highest-leverage context moves available to you:

  • If you need a specific format, paste or describe one good example

  • If you need a particular tone, quote a sentence that demonstrates it

  • If you need a specific depth, show what "too shallow" vs. "good depth" looks like

Structure:

"Here's an example of the level of detail I'm looking for: [example]. Here's what good looks like: [example]."

In our Symphony implementations, we pre-load examples into every agent workflow. The agent has seen what good output looks like before it ever starts generating.

Step 4: Embed Performance Criteria

Instead of "maximise X," specify "I need X at this specific level because..."

Weak: "Make it accurate"
Strong: "The facts should be verifiable, something a reader could fact-check in 2 minutes online"

Weak: "Be concise"
Strong: "Each section should fit in a single paragraph so this works as a quick reference"

Weak: "Explain it well"
Strong: "Someone should be able to understand this without searching for definitions of technical terms"

The difference is specificity at the outcome level. You're describing what success looks like in observable terms, not abstract qualities.

Step 5: Hierarchical Goal Structure (For Complex Tasks)

Research on hierarchical decomposition shows that complex goals break down into sub-goals—but the model handles this internally if you structure your goal statement correctly.

Here's the structure for hierarchical goals:

  1. State the primary goal (what you ultimately need)

  2. Name the intermediate stages (what should happen in sequence, without prescribing how)

  3. Specify constraints between stages (what must happen before what)

Example:

"I need an analysis where:

  • First, you assess the current state [constraint: based only on the data provided]

  • Then, you identify patterns [constraint: patterns should have at least 3 supporting examples each]

  • Finally, you recommend actions [constraint: only actions that address the identified patterns]"

Notice: You're stating what each stage accomplishes, not how to accomplish it. The model reasons about the how.
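If you assemble prompts like this often, the stage-plus-constraint structure lends itself to a small helper. Here's a minimal Python sketch; the function name and output wording are ours, not any library's API:

```python
def hierarchical_goal(primary_goal, stages):
    """Build a hierarchical goal statement from (stage, constraint) pairs.

    Each stage says what it accomplishes, not how; the constraint bounds it.
    """
    lines = [f"I need {primary_goal} where:"]
    ordering = ["First", "Then", "Finally"]
    for i, (stage, constraint) in enumerate(stages):
        label = ordering[i] if i < len(ordering) else "Next"
        lines.append(f"- {label}, you {stage} [constraint: {constraint}]")
    return "\n".join(lines)

prompt = hierarchical_goal(
    "an analysis",
    [
        ("assess the current state", "based only on the data provided"),
        ("identify patterns", "at least 3 supporting examples each"),
        ("recommend actions", "only actions that address the identified patterns"),
    ],
)
print(prompt)
```

The point of the helper is discipline: it forces every stage to carry a constraint, so you never slip back into prescribing execution steps.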

The Template We Use Internally

Here's the structure we apply to every significant AI interaction:

[Primary Goal]: I need [output type] that [accomplishes what]

[Context]: This is for [audience/domain], where [what matters]

[Examples & Performance]: Success means [specific observable outcome], not [what failure looks like]

[Constraints]: Focus on [priority], avoid [anti-priority]

[Outcome]: The reader should [what they'll be able to do after]

Real example we've used:

"I need a strategy document that helps my team decide whether to adopt an AI tool for our workflow.

This is for non-technical product managers who need to understand trade-offs quickly.

Success means they can explain the three key decisions to leadership, not that they understand the underlying architecture.

Focus on practical implications and concrete use cases. Avoid deep technical explanations or comparison with competitors.

After reading, they should be able to articulate why this tool matters for our specific situation."
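The five-part template is regular enough to encode as a reusable function. A minimal Python sketch (our own illustration; the field names and phrasing are assumptions, not a library API):

```python
def context_prompt(goal, context, success, not_success, focus, avoid, outcome):
    """Assemble the five-part context template into a single prompt string."""
    return (
        f"Primary goal: I need {goal}.\n"
        f"Context: This is for {context}.\n"
        f"Performance: Success means {success}, not {not_success}.\n"
        f"Constraints: Focus on {focus}. Avoid {avoid}.\n"
        f"Outcome: The reader should {outcome}."
    )

prompt = context_prompt(
    goal="a strategy document that helps my team decide whether to adopt an AI tool",
    context="non-technical product managers who need to understand trade-offs quickly",
    success="they can explain the three key decisions to leadership",
    not_success="that they understand the underlying architecture",
    focus="practical implications and concrete use cases",
    avoid="deep technical explanations or comparisons with competitors",
    outcome="be able to articulate why this tool matters for our specific situation",
)
print(prompt)
```

Filling the arguments is the real work; the function just guarantees you never skip a field.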

The Principles Behind the Framework

Goal-oriented framing beats instruction-oriented. Tell the model what you're trying to achieve, not how to achieve it.

Constraints > Rules: Specify boundaries and priorities, let the model choose execution.

Examples > Prescriptions: Show what you want; don't describe it.

Specificity at the outcome level, flexibility at the process level: Know what success looks like, be flexible about how.

Internal reasoning is powerful: Modern models handle complexity internally; your job is to communicate intent, not decompose the task.

7 Templates You Can Use Today

Here are seven context-engineered prompts we use regularly. Replace the [brackets] with your own context.

1. Strategic Decision Memo

Use when deciding whether to adopt an AI tool or approach.

Act like a senior product strategist.

Primary goal: I need a 2-page decision memo on whether we should adopt [AI tool] for [team/workflow]. The memo must end with a clear "yes, no, or test first" recommendation.

Context: This is for non-technical executives who care about risk, cost, and impact on our current workflow—not model details.

Constraints:
- Focus on business impact, change management, and timelines
- Avoid technical jargon about models or infrastructure
- Compare only three options: adopt [AI tool], keep the current workflow, or run a limited test first

2. Cold Email That Actually Sells

Use when you need sales copy that converts, not spell-checking.

Act like a senior SDR writing cold emails for B2B SaaS.

Primary goal: Rewrite the email below so it gets more replies from [ICP, e.g., "Heads of Marketing at 50–500 person SaaS companies"].

Context:
- Offer: [1 sentence about what you sell]
- Typical objection: [e.g., "we already use another tool"]
- Social proof: [short proof, e.g., "used by 3 public SaaS companies"]

Constraints:
- 120 words max
- No fake urgency. No "hope this email finds you well"
- Tone: confident and practical, not hype
- Make the next step very specific and low friction

Performance: Success means a busy VP can scan this on mobile in under 10 seconds and know what this is, why it matters now, and the exact next step.

Outcome: Give me 2 versions:
1. Direct and blunt
2. Softer and more relationship-focused

Here is my current version that underperforms:
[paste your email here]

3. LinkedIn Post With 1 Story, 1 Lesson

Use instead of "write a viral LinkedIn post about AI."

Act like a LinkedIn ghostwriter for a founder.

Primary goal: Write a post that tells one specific story about [experience, e.g., "the first time I shipped an AI feature that broke in production"] and teaches one clear lesson about using AI at work.

Context:
- Audience: non-technical professionals who feel behind on AI but are not beginners in their jobs
- Platform: LinkedIn feed on mobile
- My positioning: [e.g., "plain-English AI for operators"]

Constraints:
- 250 words
- Strong hook in the first 2 lines that makes a busy operator stop scrolling
- Exactly 1 story and 1 clear takeaway
- No generic lines like "AI is the future"
- No emoji inside the text

Examples: Here is a past post that matches my voice and structure:
[paste an example or describe it briefly]

4. 30-Day Learning Plan

Use when you want to go from "no idea" to "competent enough to use it."

Act like a personal tutor and learning designer.

Primary goal: Create a 30-day learning plan to go from beginner to competent in [skill, e.g., "pricing SaaS products"], with 30–45 minutes per day.

Context:
- My background: [short, e.g., "senior marketer, no formal finance training"]
- My goal: [e.g., "price my own product and understand the trade-offs"]
- Preferred learning style: [reading, exercises, videos, or mix]

5. Product Spec Engineers Can Actually Use

Use to turn a fuzzy idea into something design and engineering can act on.

Act like a product manager writing a lean spec.

Primary goal: Write a 1-page product spec for a new feature called [feature name] that helps [target user] achieve [user outcome] using [AI capability, e.g., "auto-summarisation"].

Context:
- Company: [1–2 lines]
- Users: [who they are and what they do]
- Tech reality: We have [brief stack and any hard constraints]

6. Customer Interview Synthesis

Use when you have a pile of notes and need sharp insight.

Act like a qualitative researcher.

Primary goal: Turn the interview notes below into clear, actionable insights for the product team.

Context:
- Product: [1–2 lines]
- Interviewees: [who they are, role, segment]

7. Personal Operating System With AI

Use when you want AI to help you run your week, not just answer questions.

Act like my personal chief of staff.

Primary goal: Design a simple weekly operating system that uses AI to help me prioritise, decide faster, and protect focus time.

Context:
- My role: [e.g., "solo founder with multiple projects"]
- Time constraints: [family, hours per week, non-negotiables]
- Current pain points: [e.g., "too many inputs, unclear priorities"]

The Bottom Line

You're not looking for magic words. You're telling the model what success looks like, for whom, and within which boundaries.

Context engineering is the skill that separates people who "use AI" from people who get results from AI. It's the same skill we embed into every autonomous agent we build for clients, because agents without proper context are just expensive random generators.

Pick one of these templates. Replace the brackets with your real context. Paste it and iterate.

The model will meet you where you are. Your job is to show it where you're going.

One Second AI builds AI revenue infrastructure for mid-market businesses. Our Symphony transformation replaces manual sales and marketing operations with autonomous AI agents, all built on context engineering principles that actually work.

Frequently Asked Questions

What does One Second AI actually do?

We help companies move from manual sales and marketing work to AI-first execution. That means designing the right strategy, then building and deploying autonomous AI agents that handle real workflows, like follow-ups, qualification, routing, reporting, and coordination, inside your existing systems. We don’t sell tools. We build systems that run.

Are you a software company or a consultancy?

Neither, and intentionally so. We work as an AI execution partner, which means we combine:
- strategy
- system design
- hands-on implementation
- continuous optimization

You get working AI agents in production, not slides, prompts, or recommendations you have to implement yourself.

What kinds of companies do you work with?

We typically work with:
- B2B companies
- Scaling teams (often €1M–€50M+ revenue)
- Sales & marketing teams struggling with manual work, slow pipelines, or tool sprawl

Our clients usually know something needs to change; they just don't want risky pilots or disconnected experiments.

Do we need to replace our current tools or systems?

No. Our AI agents are designed to work with your existing stack: CRM, marketing tools, communication channels, data sources, and internal processes. The goal is not replacement; the goal is orchestration and automation across what you already use.

How is this different from automation tools or AI copilots?

Most tools:
- automate individual tasks
- require constant human input
- don't learn or adapt

Our approach:
- deploys autonomous agents, not scripts
- connects multiple workflows together
- operates continuously
- improves based on outcomes

Think less "automation" and more AI workforce.

Is this safe for our brand, data, and compliance requirements?

Yes, governance is built in from day one. We design agents with:
- brand rules
- approval logic where needed
- clear boundaries on actions
- auditability and monitoring

This isn't experimental AI running loose. It's controlled, production-grade deployment.

Can we start small before committing long term?

Yes, and many teams do. A common starting point is:
- an AI assessment
- a strategy workshop
- a focused initial deployment

This allows you to validate fit and value before scaling.
