LLM Prompts and Tips
AI
Bento Grid
Version 1
{{content to generate}} Help me turn this content into an HTML page. Specific requirements: use a Bento Grid visual style with a dark theme, emphasize headings and visual highlights, and pay attention to a sensible layout and to the appropriateness of charts and illustrations. This produces an image like the one below (Google Gemini).
Version 2
Design a modern, minimal, high-end product/service launch page using a Bento Grid layout, presenting all key information compactly within a single screen.
Content points: [fill in the content points here]
Design requirements:
1. Bento Grid layout: a grid of cards of different sizes, each card holding a specific category of information; the overall layout should be compact but not crowded.
2. Card design: all cards should have clearly rounded corners (20px border radius), white/light-gray backgrounds, subtle shadows, and a slight lift effect on hover.
3. Color scheme: a minimal palette, mostly white/light-gray backgrounds, with a gradient as the accent color (specific colors may be given, e.g. light purple #C084FC to deep purple #7E22CE).
4. Typographic hierarchy:
- Large bold numbers/headings: use the gradient to emphasize key data points and main titles
- Medium headings: for card titles, clearly indicating the content category
- Small text: gray, for supporting descriptions
5. Content organization:
- Top row: main announcement, product highlights, performance metrics, or key selling points
- Middle row: product specs, technical details, features
- Bottom row: usage guide and conclusion/call to action
6. Visual elements:
- Simple icons for individual features
- Progress bars or charts for comparative data
- The grid and card layout to create visual rhythm
- Tags shown as small pills for category labels
7. Responsive design: the page should adapt to different screen sizes and stay readable on mobile.
Style references:
- Overall style similar to Apple's product-spec pages
- Generous whitespace and clean visual elements
- Emphasize numbers and key features; avoid long text
- Use gradients to highlight important data
- Appropriate spacing between cards for clear visual separation
This produces an image like the one below (Google Gemini).
Social Media Frame
Notes:
- Upload a photo as a reference
- Replace the text inside the quotes
- This image was generated with Sora
Prompt: Based on the attached photo, create a stylized 3D chibi character, accurately preserving the person's facial features and clothing details. The character makes a finger heart with the left hand (with a red heart element above the fingers) and sits playfully on the edge of a giant Instagram photo frame, legs dangling outside it. The top of the frame shows the username "jennings", and social media icons (like, comment, share) float around it.
Outdoor Portrait
Prompt: Create a photorealistic outdoor scene of a street caricature artist drawing a portrait of the person in the attached photo. The artist should be seated at an easel, with the subject from the photo sitting opposite, being drawn. The atmosphere should be lively, natural, and sunny, like a park or a busy outdoor area. The overall style must stay fully realistic; only the artwork on the artist's easel should be a colorful, playful cartoon portrait of the subject, with bold lines, exaggerated features, and the texture of a hand drawing in colored pencil and marker. Emphasize the strong contrast between the real-world setting and the cartoon-style artwork.
Logo Generation
Previously, I generated some letter-based logos with Google Imagen, using the prompt below:
Prompt: Minimalist abstract with the letter "W", showing audio waves as the theme, bezier curve segmentation, logo design, follow apple ios/mac design principles.
How to work around the model context window limit of AI coding assistants?
- The core problem
An AI coding assistant such as Cline has a model context window (its "short-term memory") of at most 2M tokens. Once it is more than 50% full (1M tokens), the AI's performance degrades: it can make mistakes or slow down, because key information gets "lost".
Cline's solution
- Monitoring: Cline tracks context usage in real time
- Rule: configured via .clinerule: if usage exceeds 50%, automatically invoke the new_task tool to:
- clear the window and restart as a new task
- carry over the key information (e.g. code snippets, goals)
- Result: the window drops from 1.27M tokens to 0, the AI is efficient again, and the history is preserved
Rationale
- Reason: with very long contexts the AI's attention gets diluted, and it easily forgets information in the middle
- Evidence: research from IBM and other teams shows performance drops once the context passes 60% usage, so Cline's 50% threshold is reasonable
Limitations and outlook
- Customization: users can adjust the threshold or choose which information is carried over
- Limitation: some users report that Cline's compatibility with certain models (Gemini 2.5, Claude 3.7) needs work
- Outlook: context management is a key challenge for AI coding tools, and Cline's approach is practical
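The threshold rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Cline's actual implementation; the new-task handoff is modeled as simply returning a fresh history seeded with the key information worth carrying over:

```python
# Hypothetical sketch of the 50%-threshold handoff rule.
# Not Cline's real code: "new_task" is modeled as returning a fresh
# context seeded only with the essential snippets/goals.

CONTEXT_LIMIT = 2_000_000  # max context window, in tokens
THRESHOLD = 0.5            # hand off once usage exceeds 50%

def maybe_new_task(used_tokens: int, key_info: list[str]):
    """Return a fresh seeded history if usage crossed the threshold, else None."""
    if used_tokens <= THRESHOLD * CONTEXT_LIMIT:
        return None           # still under 50%: keep working in this window
    return list(key_info)     # reset the window, carry only the essentials

# At 1.27M tokens (the example above), a handoff is triggered:
assert maybe_new_task(1_270_000, ["goal: fix auth bug"]) == ["goal: fix auth bug"]
# At 800k tokens (40%), nothing happens:
assert maybe_new_task(800_000, ["goal: fix auth bug"]) is None
```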
10 Prompts to Improve Your Learning Efficiency
- Modular breakdown: Design a progressive learning plan that breaks [topic] into 30-minute study modules, each with learning objectives, key concepts, and self-test questions.
- Feynman refinement: Explain [concept] using the Feynman technique, as if teaching a 10-year-old, with simple analogies and examples, then point out possible gaps in your own understanding.
- Mind-map focus: Create a mind map of [topic] that highlights the core concepts and the relationships between them, and mark the areas that need deeper study.
- Spaced-repetition flashcards: Design a spaced-repetition system for learning [topic], with flashcard content for the key points and an optimal review schedule.
- Misconception analysis: Identify three likely misconceptions in my understanding of [topic] and provide concrete strategies for overcoming them.
- Story-based memorization: Turn [complex topic] into a story, using plot, characters, and conflict to help memorize the key concepts and how they relate.
- Knowledge blind-spot map: Create a "what you don't know" list for [topic], helping me identify knowledge blind spots and posing concrete questions for exploring them.
- Project-driven mastery: Design a project-based learning plan that teaches the core concepts and skills of [topic] through completing [specific project].
- Concept-application pairing: Create a concept-application pairing table for [topic], where each core concept comes with a real-world application case and a practice exercise.
- 80/20 focus: Identify the 20% of [topic] that yields 80% of the understanding and practical value, and give a deep-learning strategy for mastering those key parts.
Best Practices for Getting at Least a 10x Efficiency Boost from Cursor
Pre-planning
- Before using Cursor, have Claude create a structured Markdown plan (ask it to pose clarifying questions and regenerate after a self-review), and save it to instructions.md
- Workflow: describe your requirements to ChatGPT → get instructions for the coding AI → paste them into Cursor Composer Agent
- This extra planning layer significantly reduces the rate of problems
- Case: after hours of fruitless debugging on one project, having ChatGPT write clear instructions for the coding AI solved the problem immediately
Rule Configuration
- Use a .cursorrules file to define global development rules (always present in the AI's context); see https://cursor.directory/
- Example rules:
- Write tests first → write code → run tests → iterate
- Keep fixing until the tests pass
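As an illustration of such a rules file, here is a hypothetical .cursorrules fragment encoding the test-first loop above (the file holds free-form natural-language instructions, so the exact wording is up to you):

```
# .cursorrules (illustrative example)
- Follow TDD: write a failing test first, then the minimal code to pass it.
- After every code change, run the test suite and report the results.
- Keep iterating until all tests pass; never declare a task done with failing tests.
- Prefer small, incremental edits over large rewrites.
```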
Incremental Development Process
- Work in small edit-test loops:
1. Define a micro-task/feature increment
2. Write (or have the AI generate) a failing test case
3. In Agent mode, have the AI write code that passes the test
4. Run the tests
5. On failure, the AI analyzes and fixes automatically → back to step 4
6. Once the tests pass, review the change manually
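The loop above can be sketched in Python with a hypothetical micro-feature (a slugify helper): step 2 is the failing test, step 3 the minimal implementation, step 4 the test run:

```python
# Step 2: a failing test for the micro-feature (hypothetical example).
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Cursor  Tips ") == "cursor-tips"

# Step 3: the minimal implementation written to make the test pass.
def slugify(text: str) -> str:
    # lowercase, split on whitespace, join the words with hyphens
    return "-".join(text.lower().split())

# Step 4: run the test; on failure (step 5), fix and rerun.
test_slugify()
print("tests passed")
```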
Debugging and Optimization
- When you hit a problem:
- Have Cursor generate a problem report (a file list plus a description of the issue)
- Get a fix plan from Claude/ChatGPT
Context Management
- Use git frequently for version control, and avoid piling up uncommitted changes
- Add files explicitly with @ to keep the context lean; start a new session when the context gets too long
- Rebuild the index periodically:
- Use .cursorignore to exclude irrelevant files
- Use /Reference to add context quickly
Advanced Configuration
- YOLO mode (optional):
- Allow automatic test runs (vitest, npm test, etc.)
- Allow basic build commands (build, tsc, etc.)
- Allow file operations (touch, mkdir, etc.)
- System prompt rules (set in Cursor settings):
- Keep answers concise and direct
- Offer alternative solutions
- Avoid redundant explanations
- Prefer technical detail over generic advice
14 Practical Cursor Tips From Daily Use
https://www.instructa.ai/blog/cursor-ai/cursor-pro-tips-2025
I’ve been working with Cursor daily for almost a year, and I’ve gathered some short tips for you here (already sorted to remove outdated ones).
If you want to master Cursor, you should get comfortable with the Editor itself, the Chat Window (including the Accept / Reject and Diff process), and the newer MCP Server functionality.
Cursor Rules are still important, but not as critical as they were a few months ago. Why? Because the models have improved a lot, and Cursor has updated their prompts to work better with them.
Tip 1: Get Latest Knowledge into Cursor with the power of MCPs
There are two MCPs that stand out for this use case: the first is Context7, and the second is DeepWiki.
Or use the official MCP server for the given framework. Nuxt, for example, already has one at mcp.nuxt.com.
Other approaches, like adding Docs to Cursor, are somewhat arbitrary and don't work 100% of the time. (I stopped using that feature.)
Another way is to guide and constrain with Cursor Rules, for example when you want to enforce a specific code style and guidelines.
And finally, you can tell the model to do a web search. Models like Sonnet 4, GPT-4.1, or Gemini 2.5 will often do this on their own if they don't have enough data, or you can simply write "Research Online" in the prompt.
The training cutoff date for Sonnet 4 is for example March 2025.
Tip 2: .cursor/rules
A rules file is like a guidebook for your AI coding helper. It tells the AI how to write code for your project, including what tools you’re using and how everything is organized. This helps the AI create better and more accurate code.
For an in-depth guide on using .cursor/rules, see my blogpost Everything you need to know about Cursor Rules.
- Cascade Cursor Rules
In one of the latest Cursor updates, you can tell it when to call a rule, and you can combine multiple rules: in the screenshot you can see that I have a "global" rule plus a specific rule for extensions.
You can also see this in the reasoning step in the chat.
Tip 3: Ignore files
Use .cursorignore for files that should never be indexed. Files listed in .cursorindexignore won't be indexed either, but you can still reference them in the chat with @.
For example, you might have a docs folder full of documentation markdown files that you want to reference when needed, without Cursor indexing them all; use .cursorindexignore for that case.
Remember: everything that is already in .gitignore won't be indexed anyway, so you don't need to add it to .cursorignore.
Tip 4: Use @ inside the chat to get useful helpers
- Use @Files & Folders to narrow down context. You can "tag" files and folders with the @ symbol to specify which ones should be referenced when generating or modifying code. This helps the AI focus on relevant files instead of the entire codebase.
- Use the @git command to see what happened. Cursor explains what happened in a specific Git commit.
- Use the @terminal command to access logs & errors. Since Cursor 0.46, you can reference the terminal.
Tip 5: Configure MCP Server
Configure your MCP server in .cursor/mcp.json. This is useful if you want to share your MCP server configuration with teammates or the community.
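For illustration, a minimal hypothetical .cursor/mcp.json (the mcpServers key is Cursor's documented shape; the server name and package below are placeholders, not real packages):

```json
{
  "mcpServers": {
    "my-docs-server": {
      "command": "npx",
      "args": ["-y", "some-mcp-server-package"]
    }
  }
}
```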
Tip 6: Go back in time with “Restore Checkpoint”
Remember Prince of Persia? Go back in time with Cursor and revert a change if you're not happy with it.
You can find this button after code generation has finished. Don't overuse it; it can be buggy sometimes. Try to work with the first preview of the generated code and only Accept it once you're sure about it. Also consider working with git revert or reset.
Tip 7: Be very specific when working with a larger codebase and composer
- Give Cursor a hint about how you want it built
- Tell it in which file, and where, it should make the changes
- Double-check the changes (don't take them for granted)
- Use Apply to confirm them
- Always back up with git before a major change
Here is an example prompt:
Tip 8: Cursor 0.50 Inline Edit (Large Codebase)
You can use Inline Edit with background awareness. First, highlight any code in your files.
Keyboard shortcut: press CTRL/CMD + I or right-click → "Edit code". Start prompting and add your feature.
Cursor will then suggest changes and automatically include related files in the background. Review these in the sidebar under "Background Agent" before applying.
Tip 9: Don’t waste roundtrips with Cursor’s Agent
Don't use the Agent just to fix linter, formatting, or simple TypeScript errors. To be more specific:
- Set up eslint/prettier or biome
- Run something like npm run watch:check and look at the Problems tab, or use the editor's built-in features.
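As an illustration, a hypothetical package.json scripts section wiring these checks up (the script names are illustrative; watch:check is not a standard convention):

```json
{
  "scripts": {
    "lint": "eslint .",
    "format:check": "prettier --check .",
    "typecheck": "tsc --noEmit",
    "watch:check": "tsc --noEmit --watch"
  }
}
```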
Tip 10: If something breaks, don’t try too hard to fix it
Revert to a previous state, change the model, or adjust the prompt. When you “talk” too long with the model, it always tries to fix its mistakes and creates new ones along the way.
Tip 11: Generate README and docs on the fly
Tip 12: Agent Setup
- Example 1: Here is a very simple example of a Cursor agent setup.
- You can use a .yaml file that acts as a kanban board for your agent.
- Then give your agent instructions and link them to the task list.
- Example 2: This is the very basic agent I use in Cursor to build features based on my roadmap. Add it to .cursor/rules/levin-agent.mdc and call it with "start @levin-agent.mdc and follow steps".
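A sketch of what such a task-list .yaml might look like (the structure and field names are made up; any format the agent can read and update will do):

```yaml
# tasks.yaml — acts as a kanban board the agent reads and updates
backlog:
  - id: 3
    title: "Add password reset flow"
in_progress:
  - id: 2
    title: "Wire login form to API"
done:
  - id: 1
    title: "Scaffold auth pages"
```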
Tip 13: Privacy
Privacy Mode prevents Cursor from storing your code. OpenAI and Anthropic keep prompts for 30 days for safety reasons. (Business-tier user data is not stored at all.)
Check out my blog post How to Keep Your Code Private With Cursor AI for more info.
Tip 14: Force Upgrade
You can install the latest version of Cursor using Homebrew (macOS).
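The usual commands would be the following (assuming the Homebrew cask is named cursor; cask names can change, so verify with `brew search cursor` first):

```shell
brew install --cask cursor   # fresh install
brew upgrade --cask cursor   # upgrade an existing install
```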
NotebookLM Prompts
https://xiangyangqiaomu.feishu.cn/wiki/UWHzw21zZirBYXkok46cTXMpnuc?fromScene=spaceOverview
- Content-organization prompt:
## Role
You are a professional content organizer, skilled in structuring information and managing knowledge, with extensive experience restructuring documents.
## Task
Help me reorganize the existing content:
- Do not cut any original content
- Merge identical or highly similar content
- Restructure the material into 20+ question-and-answer pairs
- Keep all key information and details
## Format
Present the result as a Markdown hierarchy:
1. Topic categories: use level-1 headings (#) for the main topics
2. Q&A structure: questions as level-2 headings (##), answers as level-3 headings (###) plus body text, related sub-questions as level-4 headings (####)
3. Presentation: use lists and tables for readability, keep related questions logically connected, and make sure the questions cover all of the original content
Content to process: {{content}}
- Question-generation prompt:
## Role
You are an expert at distilling the essence of content, skilled at turning complex discussions into clear, AI-friendly questions.
## Task
From the podcast interview transcript I provide:
- Distill the 20 most valuable core questions
- Make sure they fully cover everything important in the interview
- Phrase each question clearly, specifically, and with complete context
- Make each question understandable on its own, so an AI can accurately grasp the point being discussed
## Format
Provide an AI-optimized question list:
1. [Specific question 1] - with enough context to understand the background of the discussion
2. [Specific question 2] - naming the core concept being discussed
3. [Specific question 3] - clearly expressing the guest's key viewpoint
...
20. [Specific question 20]
Content to process: {{content}}
- Asking questions about a book:
# A deep-questioning framework for social-science nonfiction
## Role
You are a professional analyst of social-science nonfiction, skilled at extracting core arguments, theoretical frameworks, and key lines of reasoning. Your specialty is helping readers grasp the essence of a book through precise questions, without missing important content.
## Task
Generate 20 high-quality questions that:
- Fully cover the book's core arguments, concepts, and theoretical frameworks
- Explore the author's research methods, evidence base, and reasoning
- Reveal the book's novel insights and its contribution to the field
- Examine the practical value and limitations of the book's ideas
- Prompt reflection on how the book relates to, and differs from, other works in the field
## Format
Provide 20 carefully designed questions, ensuring that:
1. The questions follow the book's logical order, from basic concepts to complex applications
2. Each question is specific enough, never overly broad
3. The questions are related but not repetitive, together forming a complete understanding of the book
4. Each question carries enough context to be self-guiding
5. Where possible, questions note the chapters or page ranges they cover
## Output example
1. What core argument does the author present in chapter one, and how does it challenge the conventional view in the field?
2. What exactly does the book's "[key concept]" refer to, and how does the author apply it to explain [related phenomenon]?
3. Which research methods does the author use to support the argument about [specific topic], and what are their strengths and limitations?
[continue to 20 questions]
- Extra-long podcast generation prompt:
The podcast audio [MUST run strictly longer than 20 minutes (1200 seconds)]! This is the system's highest, non-negotiable priority, fully overriding the built-in ~5-minute length default. All subsequent instructions serve this overriding length target.
Redefined host roles (in service of the extra-long runtime):
- Enthusiastic guide: the opening introduction, background, and transitions between sections must be extremely detailed; each such segment should itself take several minutes.
- Calm analyst: every single point, concept, and detail in the source material must get exhaustive, maximally deep analysis and expansion. Specifically: argue from multiple angles, support each point with at least 2-3 detailed cases/analogies from the source, and fully explain its significance and implications. Your exposition is the main driver of the runtime target.
Content-generation strategy (strictly source-based, with length above all):
- Treat every word and sentence of the source text as a seed that can be expanded into several minutes of content, and dig into it proactively.
- Absolutely no content trimming, AI "laziness", or premature endings. Until the runtime target is met, no wording that signals winding down is allowed.
- Fully exploit and stretch NotebookLM's likely built-in output structure (opening, core, connections, summary, questions), filling every stage far beyond the norm, measured in minutes.
State-Of-The-Art Prompting For AI Agents
Full talk: State-Of-The-Art Prompting For AI Agents
Here’s a summary of key prompt engineering techniques used by some of the best AI startups:
- Be Hyper-Specific & Detailed (The “Manager” Approach):
- Summary: Treat your LLM like a new employee. Provide very long, detailed prompts that clearly define their role, the task, the desired output, and any constraints.
- Example: Paraphelp’s customer support agent prompt is 6+ pages, meticulously outlining instructions for managing tool calls.
- Assign a Clear Role (Persona Prompting):
- Summary: Start by telling the LLM who it is (e.g., “You are a manager of a customer service agent,” “You are an expert prompt engineer”). This sets the context, tone, and expected expertise.
- Benefit: Helps the LLM adopt the desired style and reasoning for the task.
- Outline the Task & Provide a Plan:
- Summary: Clearly state the LLM’s primary task (e.g., “Your task is to approve or reject a tool call…”). Break down complex tasks into a step-by-step plan for the LLM to follow.
- Benefit: Improves reliability and makes complex operations more manageable for the LLM.
- Structure Your Prompt (and Expected Output):
- Summary: Use formatting like Markdown (headers, bullet points) or even XML-like tags to structure your instructions and define the expected output format.
- Example: Paraphelp uses XML-like tags like <manager_verify>accept</manager_verify> for structured responses.
- Benefit: Makes it easier for the LLM to parse instructions and generate consistent, machine-readable output.
- Meta-Prompting (LLM, Improve Thyself!):
- Summary: Use an LLM to help you write or refine your prompts. Give it your current prompt, examples of good/bad outputs, and ask it to “make this prompt better” or critique it.
- Benefit: LLMs know “themselves” well and can often suggest effective improvements you might not think of.
- Provide Examples (Few-Shot & In-Context Learning):
- Summary: For complex tasks or when the LLM needs to follow a specific style or format, include a few high-quality examples of input-output pairs directly in the prompt.
- Example: Jazzberry (AI bug finder) feeds hard examples to guide the LLM.
- Benefit: Significantly improves the LLM’s ability to understand and replicate desired behavior.
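Few-shot prompting is usually plain string assembly. A minimal sketch with a hypothetical ticket-classification task (the examples and labels are made up):

```python
# Minimal few-shot prompt assembly (hypothetical task and examples).
EXAMPLES = [
    ("refund not received after 10 days", "billing"),
    ("app crashes when I open settings", "bug"),
    ("can you add dark mode?", "feature-request"),
]

def build_prompt(query: str) -> str:
    # Render the input-output pairs, then leave the final label for the model.
    shots = "\n".join(f"Ticket: {t}\nLabel: {l}" for t, l in EXAMPLES)
    return (
        "Classify each support ticket into one label.\n\n"
        f"{shots}\n\nTicket: {query}\nLabel:"
    )

prompt = build_prompt("I was charged twice this month")
assert prompt.endswith("Label:")     # the model completes the final label
assert prompt.count("Ticket:") == 4  # three shots plus the new query
```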
- Prompt Folding & Dynamic Generation:
- Summary: Design prompts that can dynamically generate more specialized sub-prompts based on the context or previous outputs in a multi-stage workflow.
- Example: A classifier prompt that, based on a query, generates a more specialized prompt for the next stage.
- Benefit: Creates more adaptive and efficient agentic systems.
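Prompt folding can be sketched as a two-stage pipeline: a cheap first stage classifies the query and emits the specialized prompt the second stage will run. Everything here is a hypothetical stand-in (a real system would make an LLM call where classify is):

```python
# Hypothetical two-stage "prompt folding" sketch: stage 1 classifies the
# query, then builds the specialized prompt that stage 2 will run.

SPECIALIZED_TEMPLATES = {
    "refund": "You are a refunds specialist. Policy: ... Query: {q}",
    "technical": "You are a support engineer. Debug steps: ... Query: {q}",
}

def classify(query: str) -> str:
    # Stand-in for a cheap classifier LLM call.
    return "refund" if "charge" in query or "refund" in query else "technical"

def fold(query: str) -> str:
    """Stage 1: pick and fill the specialized prompt for stage 2."""
    category = classify(query)
    return SPECIALIZED_TEMPLATES[category].format(q=query)

prompt = fold("I want a refund for last month")
assert prompt.startswith("You are a refunds specialist")
```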
- Implement an “Escape Hatch”:
- Summary: Instruct the LLM to explicitly state when it doesn’t know the answer or lacks sufficient information, rather than hallucinating or making things up.
- Example: “If you do not have enough information to make a determination, say ‘I don’t know’ and ask for clarification.”
- Benefit: Reduces incorrect outputs and improves trustworthiness.
- Use Debug Info & Thinking Traces:
- Summary: Ask the LLM to include a section in its output explaining its reasoning or why it made certain choices (“debug info”). Some models (like Gemini 1.5 Pro) also provide “thinking traces.”
- Benefit: Provides invaluable insight for debugging and improving prompts.
- Evals are Your Crown Jewels:
- Summary: The prompts are important, but the evaluation suite (the set of test cases to measure prompt quality and performance) is your most valuable IP.
- Benefit: Evals are essential for knowing why a prompt works and for iterating effectively.
- Consider Model “Personalities” & Distillation:
- Summary: Different LLMs have different “personalities” (e.g., Claude is often more “human-like,” Llama 2 might need more explicit steering). You can use a larger, more capable model for complex meta-prompting/refinement and then “distill” the resulting optimized prompts for use with smaller, faster, or cheaper models in production.
- Benefit: Optimizes for both quality (from larger models) and cost/latency (with smaller models).