Prompt engineering

From Wikipedia, the free encyclopedia
Structuring text as input to generative artificial intelligence

Prompt engineering is the process of structuring or crafting an instruction in order to produce better outputs from a generative artificial intelligence (AI) model.[1]

A prompt is natural language text describing the task that an AI should perform.[2] A prompt for a text-to-text language model can be a query, a command, or a longer statement including context, instructions, and conversation history. Prompt engineering may involve phrasing a query, specifying a style, choosing words and grammar,[3] providing relevant context, or describing a character for the AI to mimic.[1]

When communicating with a text-to-image or a text-to-audio model, a typical prompt is a description of a desired output such as "a high-quality photo of an astronaut riding a horse"[4] or "Lo-fi slow BPM electro chill with organic samples".[5] Prompting a text-to-image model may involve adding, removing, or emphasizing words to achieve a desired subject, style, layout, lighting, and aesthetic.[6]

History


In 2018, researchers first proposed that all previously separate tasks in natural language processing (NLP) could be cast as a question-answering problem over a context. In addition, they trained the first single, joint, multi-task model that would answer any task-related question like "What is the sentiment", "Translate this sentence to German", or "Who is the president?"[7]

In 2025, researchers proposed a reflexive prompt engineering framework that incorporates ethical and governance considerations into prompt design and management.[8]

The AI boom saw an increase in the number of "prompting techniques" used to get a model to produce the desired output and avoid nonsensical output, a process characterized by trial and error.[9] After the release of ChatGPT in 2022, prompt engineering was soon seen as an important business skill, albeit one with an uncertain economic future.[1]

A repository for prompts reported that over 2,000 public prompts for around 170 datasets were available in February 2022.[10] In 2022, the chain-of-thought prompting technique was proposed by Google researchers.[11][12] In 2023, several text-to-text and text-to-image prompt databases were made publicly available.[13][14] The Personalized Image-Prompt (PIP) dataset, a generated image-text dataset categorized by 3,115 users, was also made publicly available in 2024.[15]

Text-to-text


Multiple distinct prompt engineering techniques have been published.

Chain-of-thought

See also: Reflection (artificial intelligence)

According to Google Research, chain-of-thought (CoT) prompting is a technique that allows large language models (LLMs) to solve a problem as a series of intermediate steps before giving a final answer. In 2022, Google Brain reported that chain-of-thought prompting improves reasoning ability by inducing the model to answer a multi-step problem with steps of reasoning that mimic a train of thought.[11][16] Chain-of-thought techniques were developed to help LLMs handle multi-step reasoning tasks, such as arithmetic or commonsense reasoning questions.[17][18]

For example, given the question "Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?", Google claims that a CoT prompt might induce the LLM to answer "A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9."[11] When applied to PaLM, a 540-billion-parameter language model, CoT prompting significantly aided the model according to Google, allowing it to perform comparably with task-specific fine-tuned models on several tasks, achieving state-of-the-art results at the time on the GSM8K mathematical reasoning benchmark.[11] Models can also be fine-tuned on CoT reasoning datasets to enhance this capability further and stimulate better interpretability.[19][20]

As originally proposed by Google,[11] each CoT prompt is accompanied by a set of input/output examples, called exemplars, to demonstrate the desired model output, making it a few-shot prompting technique. However, according to a later paper from researchers at Google and the University of Tokyo, simply appending the words "Let's think step-by-step"[21] was also effective, which allowed CoT to be employed as a zero-shot technique.

An example format of few-shot CoT prompting with in-context exemplars:[22]

   Q: {example question 1}
   A: {example answer 1}
   ...
   Q: {example question n}
   A: {example answer n}

   Q: {question}
   A: {LLM output}

An example format of zero-shot CoT prompting:[21]

   Q: {question}. Let's think step by step.
   A: {LLM output}
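Both formats can be assembled programmatically. A minimal Python sketch (the function names and toy questions are illustrative; the trailing "A:" is left for the model to complete):

```python
def few_shot_cot_prompt(exemplars, question):
    """Few-shot CoT: prepend worked (question, answer) exemplars."""
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

def zero_shot_cot_prompt(question):
    """Zero-shot CoT: append the 'Let's think step by step' trigger."""
    return f"Q: {question}. Let's think step by step.\nA:"

exemplars = [("The cafeteria had 23 apples. They used 20 and bought 6 more. "
              "How many apples do they have?",
              "23 - 20 = 3, then 3 + 6 = 9. The answer is 9.")]
prompt = few_shot_cot_prompt(exemplars,
                             "A farm has 15 cows and sells 6. How many remain?")
```

Either string is then sent to the model as-is; the model's continuation after "A:" is its answer.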

In-context learning


In-context learning refers to a model's ability to temporarily learn from prompts. For example, a prompt may include a few examples for a model to learn from, such as asking the model to complete "maison → house, chat → cat, chien →" (the expected response being dog),[23] an approach called few-shot learning.[24]

In-context learning is an emergent ability[25] of large language models, a property of model scale: breaks[26] in downstream scaling laws mean that its efficacy increases at a different rate in larger models than in smaller models.[25][11] Unlike training and fine-tuning, which produce lasting changes, in-context learning is temporary.[27] Training models to perform in-context learning can be viewed as a form of meta-learning, or "learning to learn".[28]

Self-consistency


Self-Consistency performs several chain-of-thought rollouts, then selects the most commonly reached conclusion out of all the rollouts.[29][30]
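The rollout-and-vote procedure can be sketched in a few lines; `sample_answer` is a hypothetical stand-in for one stochastic CoT rollout of an LLM (sampled at temperature > 0) that returns only the final answer:

```python
from collections import Counter

def self_consistency(sample_answer, question, n=5):
    """Run n chain-of-thought rollouts and keep the majority answer."""
    answers = [sample_answer(question) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic stand-in for five stochastic rollouts.
rollouts = iter(["9", "8", "9", "9", "7"])
result = self_consistency(lambda q: next(rollouts), "How many apples?", n=5)
# result == "9": the most commonly reached conclusion
```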

Tree-of-thought


Tree-of-thought prompting generalizes chain-of-thought by generating multiple lines of reasoning in parallel, with the ability to backtrack or explore other paths. It can use tree search algorithms like breadth-first search, depth-first search, or beam search.[30][31]
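The search over partial reasoning paths can be sketched as follows; `expand` and `score` are hypothetical stand-ins for LLM calls that propose next thoughts and evaluate partial paths (a toy digit-building problem serves as the scorer here):

```python
def tree_of_thoughts(expand, score, root, beam_width=2, depth=3):
    """Beam search over partial reasoning paths (lists of thoughts).

    expand(path) proposes candidate next thoughts;
    score(path) rates a partial path. Setting beam_width to the number
    of candidates gives plain breadth-first search.
    """
    frontier = [[root]]
    for _ in range(depth):
        candidates = [path + [t] for path in frontier for t in expand(path)]
        if not candidates:
            break
        # Keep only the best beam_width paths.
        frontier = sorted(candidates, key=score, reverse=True)[:beam_width]
    return max(frontier, key=score)

# Toy problem: build the largest number by appending digits.
best = tree_of_thoughts(
    expand=lambda path: ["1", "5", "9"],
    score=lambda path: int("".join(path) or "0"),
    root="9",
    beam_width=2,
    depth=2,
)
# best == ["9", "9", "9"]
```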

Prompting to estimate model sensitivity


Research consistently demonstrates that LLMs are highly sensitive to subtle variations in prompt formatting, structure, and linguistic properties. Some studies have shown differences of up to 76 accuracy points across formatting changes in few-shot settings.[32] Linguistic features such as morphology, syntax, and lexico-semantic choices significantly influence prompt effectiveness and can meaningfully enhance task performance across a variety of tasks.[3][33] Clausal syntax, for example, improves consistency and reduces uncertainty in knowledge retrieval.[34] This sensitivity persists even with larger model sizes, additional few-shot examples, or instruction tuning.

To address this sensitivity and make models more robust, several methods have been proposed. FormatSpread facilitates systematic analysis by evaluating a range of plausible prompt formats, offering a more comprehensive performance interval.[32] Similarly, PromptEval estimates performance distributions across diverse prompts, enabling robust metrics such as performance quantiles and accurate evaluations under constrained budgets.[35]
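A FormatSpread-style evaluation can be sketched as measuring the same task under several plausible prompt formats and reporting the spread; here a toy "model" that only understands one of two formats illustrates how wide that interval can be (all names and data are illustrative):

```python
def accuracy(model, prompts_and_labels):
    """Fraction of prompts for which the model returns the gold label."""
    correct = sum(model(p) == y for p, y in prompts_and_labels)
    return correct / len(prompts_and_labels)

def format_spread(model, examples, formats):
    """Evaluate the same examples under each prompt format and report
    the (min, max) performance interval across formats."""
    scores = []
    for fmt in formats:
        data = [(fmt.format(x=x), y) for x, y in examples]
        scores.append(accuracy(model, data))
    return min(scores), max(scores)

# Toy model that only "understands" one of the two formats.
model = lambda p: "positive" if p.startswith("Input:") else "negative"
examples = [("great movie", "positive"), ("loved it", "positive")]
lo, hi = format_spread(model, examples, ["Input: {x}", "Q: {x}"])
# (lo, hi) == (0.0, 1.0): same model, same data, different formats
```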

Automatic prompt generation


Retrieval-augmented generation

Main article: Retrieval-augmented generation

Retrieval-augmented generation (RAG) is a technique that enables generative artificial intelligence (Gen AI) models to retrieve and incorporate new information. It modifies interactions with an LLM so that the model responds to user queries with reference to a specified set of documents, using this information to supplement information from its pre-existing training data. This allows LLMs to use domain-specific and/or updated information.[36]

RAG improves large language models by incorporating information retrieval before generating responses. Unlike traditional LLMs that rely on static training data, RAG pulls relevant text from databases, uploaded documents, or web sources. According to Ars Technica, "RAG is a way of improving LLM performance, in essence by blending the LLM process with a web search or other document look-up process to help LLMs stick to the facts." This method helps reduce AI hallucinations, which have led to real-world issues like chatbots inventing policies or lawyers citing nonexistent legal cases. By dynamically retrieving information, RAG enables AI to provide more accurate responses without frequent retraining.[37]
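The retrieve-then-prompt pattern can be sketched minimally; the word-overlap scorer below is a stand-in for a real retriever such as BM25 or dense embeddings, and the documents are made up for illustration:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (a stand-in
    for a real retriever such as BM25 or dense embeddings)."""
    qwords = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(qwords & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query, documents, k=2):
    """Prepend retrieved passages so the model answers with reference
    to them rather than from parametric memory alone."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents, k))
    return (f"Answer using only the context below.\n"
            f"Context:\n{context}\nQ: {query}\nA:")

docs = ["The refund policy allows returns within 30 days.",
        "Shipping is free over $50.",
        "Our office is in Berlin."]
prompt = rag_prompt("What is the refund policy?", docs, k=1)
```

The resulting prompt carries only the retrieved passage about refunds, so the model's answer is grounded in that document.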

Graph retrieval-augmented generation

GraphRAG with a knowledge graph combining access patterns for unstructured, structured, and mixed data

GraphRAG (coined by Microsoft Research) is a technique that extends RAG with the use of a knowledge graph (usually LLM-generated) to allow the model to connect disparate pieces of information, synthesize insights, and holistically understand summarized semantic concepts over large data collections. It was shown to be effective on datasets like the Violent Incident Information from News Articles (VIINA).[38][39]

Earlier work showed the effectiveness of using a knowledge graph for question answering using text-to-query generation.[40] These techniques can be combined to search across both unstructured and structured data, providing expanded context and improved ranking.

Using language models to generate prompts


LLMs themselves can be used to compose prompts for LLMs.[41] The automatic prompt engineer algorithm uses one LLM to beam search over prompts for another LLM:[42][43]

  • There are two LLMs: the target LLM and the prompting LLM.
  • The prompting LLM is presented with example input-output pairs and asked to generate instructions that could have caused a model following the instructions to produce those outputs, given the inputs.
  • Each of the generated instructions is used to prompt the target LLM, followed by each of the inputs. The log-probabilities of the outputs are computed and summed; this is the score of the instruction.
  • The highest-scored instructions are given to the prompting LLM for further variations.
  • The process repeats until some stopping criterion is reached, then the highest-scored instructions are output.
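The loop above can be sketched as follows; `propose` and `logprob` are hypothetical stubs standing in for the prompting LLM and the target LLM's log-probabilities:

```python
def automatic_prompt_engineer(propose, logprob, pairs, rounds=2, keep=2):
    """Sketch of the search loop described above.

    propose(best_so_far) stands in for the prompting LLM, which drafts
    candidate instructions (and later, variations of the best ones);
    logprob(instruction, x, y) stands in for the target LLM's
    log-probability of output y given the instruction and input x.
    """
    def score(instruction):
        # Score = summed log-probabilities of the desired outputs.
        return sum(logprob(instruction, x, y) for x, y in pairs)

    candidates = propose([])
    for _ in range(rounds):
        best = sorted(candidates, key=score, reverse=True)[:keep]
        candidates = best + propose(best)  # ask for further variations
    return max(candidates, key=score)

# Illustrative stubs standing in for the two LLMs.
pairs = [("chien", "dog"), ("chat", "cat")]
propose = lambda best: ["Translate French to English.", "Repeat the input."]
logprob = lambda ins, x, y: 0.0 if ins.startswith("Translate") else -5.0
best = automatic_prompt_engineer(propose, logprob, pairs)
# best == "Translate French to English."
```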

CoT examples can be generated by LLMs themselves. In "auto-CoT", a library of questions is converted to vectors by a model such as BERT. The question vectors are clustered, and the questions closest to the centroid of each cluster are selected, yielding a diverse subset. An LLM performs zero-shot CoT on each selected question, and each question together with its CoT answer is added to a dataset of demonstrations. These diverse demonstrations can then be added to prompts for few-shot learning.[44]
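The cluster-then-select step can be sketched with a minimal k-means; `embed` is a stand-in for a sentence encoder such as BERT, and the toy 2D embeddings below are made up for illustration:

```python
import math
import random

def kmeans(vectors, k, iters=10, seed=0):
    """Minimal k-means used to cluster question embeddings."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            nearest = min(range(k), key=lambda i: math.dist(v, centroids[i]))
            clusters[nearest].append(v)
        centroids = [tuple(sum(d) / len(c) for d in zip(*c)) if c
                     else centroids[i] for i, c in enumerate(clusters)]
    return centroids

def auto_cot_demonstrations(questions, embed, k=2):
    """Pick the question nearest each centroid: a diverse subset to run
    zero-shot CoT on and reuse as few-shot demonstrations."""
    vectors = [embed(q) for q in questions]
    centroids = kmeans(vectors, k)
    return [min(questions, key=lambda q: math.dist(embed(q), c))
            for c in centroids]

# Toy embeddings: two arithmetic questions, two geography questions.
emb = {"2+2?": (0.0, 0.0), "3+5?": (0.1, 0.0),
       "Capital of France?": (10.0, 10.0), "Capital of Japan?": (10.1, 10.0)}
demos = auto_cot_demonstrations(list(emb), emb.get, k=2)
# demos contains one question from each cluster
```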

Automatic prompt optimization


Automatic prompt optimization techniques refine prompts for LLMs using test datasets and comparison metrics to determine whether changes improve performance. Methods such as MIPRO jointly optimize instructions and demonstrations for multi-stage language model programs,[45] while GEPA evolves prompts through reflective prompt evolution.[46] There are also open-source implementations of such algorithms in frameworks like DSPy[47] and Opik.[48]

Text-to-image

See also: Artificial intelligence visual art § Prompt engineering and sharing, and Artificial intelligence visual art

In 2022, text-to-image models like DALL-E 2, Stable Diffusion, and Midjourney were released to the public. These models take text prompts as input and use them to generate images.[49][6]

Demonstration of the effect of negative prompts on images generated with Stable Diffusion
  • Top: no negative prompt
  • Centre: "green trees"
  • Bottom: "round stones, round rocks"

Prompt formats


Early text-to-image models typically do not understand negation, grammar, and sentence structure in the same way as large language models, and may thus require a different set of prompting techniques. The prompt "a party with no cake" may produce an image including a cake.[50] As an alternative, negative prompts allow a user to indicate, in a separate prompt, which terms should not appear in the resulting image.[51] Techniques such as framing the normal prompt as a sequence-to-sequence language modeling problem can be used to automatically generate an output for the negative prompt.[52]
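In many diffusion implementations (including common Stable Diffusion pipelines), negative prompts act through classifier-free guidance: the noise prediction conditioned on the negative prompt takes the place of the unconditional prediction, steering generation away from it. A minimal numeric sketch, with made-up noise values and an illustrative guidance scale (real pipelines operate on tensors):

```python
def guided_noise(eps_pos, eps_neg, scale):
    """Classifier-free guidance with a negative prompt: push the
    prediction away from the negative prompt's direction, element-wise."""
    return [n + scale * (p - n) for p, n in zip(eps_pos, eps_neg)]

# Made-up noise predictions for illustration.
eps_pos = [0.2, 0.4]   # conditioned on the prompt
eps_neg = [0.1, 0.5]   # conditioned on the negative prompt
out = guided_noise(eps_pos, eps_neg, scale=7.5)
```

A larger scale amplifies the difference between the positive and negative conditioning, more strongly suppressing the negative prompt's content.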

A text-to-image prompt commonly includes a description of the subject of the art, the desired medium (such as digital painting or photography), style (such as hyperrealistic or pop-art), lighting (such as rim lighting or crepuscular rays), color, and texture.[53] Word order also affects the output of a text-to-image prompt: words closer to the start of a prompt may be emphasized more heavily.[54]

The Midjourney documentation encourages short, descriptive prompts: instead of "Show me a picture of lots of blooming California poppies, make them bright, vibrant orange, and draw them in an illustrated style with colored pencils", an effective prompt might be "Bright orange California poppies drawn with colored pencils".[50]

Artist styles


Some text-to-image models are capable of imitating the style of particular artists by name. For example, the phrase "in the style of Greg Rutkowski" has been used in Stable Diffusion and Midjourney prompts to generate images in the distinctive style of Polish digital artist Greg Rutkowski.[55] Famous artists such as Vincent van Gogh and Salvador Dalí have also been used for styling and testing.[56]

Non-text prompts


Some approaches augment or replace natural language text prompts with non-text input.

Textual inversion and embeddings


For text-to-image models, textual inversion performs an optimization process to create a new word embedding based on a set of example images. This embedding vector acts as a "pseudo-word" which can be included in a prompt to express the content or style of the examples.[57]

Image prompting


In 2023, Meta's AI research division released Segment Anything, a computer vision model that can perform image segmentation from prompts. As an alternative to text prompts, Segment Anything can accept bounding boxes, segmentation masks, and foreground/background points.[58]

Using gradient descent to search for prompts


In "prefix-tuning",[59] "prompt tuning", or "soft prompting",[60] floating-point-valued vectors are searched directly by gradient descent to maximize the log-likelihood of the outputs.

Formally, let $\mathbf{E} = \{\mathbf{e}_1, \dots, \mathbf{e}_k\}$ be a set of soft prompt tokens (tunable embeddings), and let $\mathbf{X} = \{\mathbf{x}_1, \dots, \mathbf{x}_m\}$ and $\mathbf{Y} = \{\mathbf{y}_1, \dots, \mathbf{y}_n\}$ be the token embeddings of the input and output respectively. During training, the tunable embeddings, input, and output tokens are concatenated into a single sequence $\operatorname{concat}(\mathbf{E}; \mathbf{X}; \mathbf{Y})$ and fed to the LLM. The losses are computed over the $\mathbf{Y}$ tokens; the gradients are backpropagated to prompt-specific parameters: in prefix-tuning, these are parameters associated with the prompt tokens at each layer; in prompt tuning, they are merely the soft tokens added to the vocabulary.[61]

More formally, this is prompt tuning. Let an LLM be written as $\mathrm{LLM}(X) = F(E(X))$, where $X$ is a sequence of linguistic tokens, $E$ is the token-to-vector function, and $F$ is the rest of the model. In prompt tuning, one provides a set of input-output pairs $\{(X^i, Y^i)\}_i$, and then uses gradient descent to search for $\arg\max_{\tilde{Z}} \sum_i \log \Pr[Y^i \mid \tilde{Z} \ast E(X^i)]$. In words, $\log \Pr[Y^i \mid \tilde{Z} \ast E(X^i)]$ is the log-likelihood of outputting $Y^i$ if the model first encodes the input $X^i$ into the vector $E(X^i)$, then prepends the "prefix vector" $\tilde{Z}$, then applies $F$. Prefix tuning is similar, but the prefix vector $\tilde{Z}$ is prepended to the hidden states in every layer of the model.[citation needed]

An earlier result uses the same idea of gradient-descent search, but is designed for masked language models like BERT, and searches only over token sequences rather than numerical vectors. Formally, it searches for $\arg\max_{\tilde{X}} \sum_i \log \Pr[Y^i \mid \tilde{X} \ast X^i]$, where $\tilde{X}$ ranges over token sequences of a specified length.[62]

Limitations


While the process of writing and refining a prompt for an LLM or generative AI shares some parallels with an iterative engineering design process, such as discovering reusable "best principles" through reproducible experimentation, the learned principles and skills depend heavily on the specific model rather than generalizing across the entire field of prompt-based generative models. Such patterns are also volatile: seemingly insignificant prompt changes can yield significantly different results.[63][64] According to The Wall Street Journal in 2025, the job of prompt engineer was one of the hottest of 2023, but has become obsolete as models better intuit user intent and as companies train employees in prompting.[65]

Prompt injection

Main article: Prompt injection
See also: SQL injection, Cross-site scripting, and Social engineering (security)

Prompt injection is a cybersecurity exploit in which adversaries craft inputs that appear legitimate but are designed to cause unintended behavior in machine learning models, particularly large language models. This attack takes advantage of the model's inability to distinguish between developer-defined prompts and user inputs, allowing adversaries to bypass safeguards and influence model behavior. While LLMs are designed to follow trusted instructions, they can be manipulated into carrying out unintended responses through carefully crafted inputs.[66][67]

References

  1. ^ a b c Genkina, Dina (March 6, 2024). "AI Prompt Engineering is Dead: Long live AI prompt engineering". IEEE Spectrum. Retrieved January 18, 2025.
  2. ^Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David;Amodei, Dario;Sutskever, Ilya (2019)."Language Models are Unsupervised Multitask Learners"(PDF). OpenAI.We demonstrate language models can perform down-stream tasks in a zero-shot setting – without any parameter or architecture modification
  3. ^abWahle, Jan Philip; Ruas, Terry; Xu, Yang; Gipp, Bela (2024)."Paraphrase Types Elicit Prompt Engineering Capabilities". In Al-Onaizan, Yaser; Bansal, Mohit; Chen, Yun-Nung (eds.).Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing. Miami, Florida, USA: Association for Computational Linguistics. pp. 11004–11033.arXiv:2406.19898.doi:10.18653/v1/2024.emnlp-main.617.
  4. ^Heaven, Will Douglas (April 6, 2022)."This horse-riding astronaut is a milestone on AI's long road towards understanding".MIT Technology Review. RetrievedAugust 14, 2023.
  5. ^Wiggers, Kyle (June 12, 2023)."Meta open sources an AI-powered music generator". TechCrunch. RetrievedAugust 15, 2023.Next, I gave a more complicated prompt to attempt to throw MusicGen for a loop: "Lo-fi slow BPM electro chill with organic samples."
  6. ^abMittal, Aayush (July 27, 2023)."Mastering AI Art: A Concise Guide to Midjourney and Prompt Engineering".Unite.AI. RetrievedMay 9, 2025.
  7. ^McCann, Bryan; Keskar, Nitish; Xiong, Caiming; Socher, Richard (June 20, 2018).The Natural Language Decathlon: Multitask Learning as Question Answering. ICLR.arXiv:1806.08730.
  8. ^ Djeffal, C. (2025). "Reflexive Prompt Engineering: A Framework for Responsible Prompt Engineering and Interaction Design". arXiv:2504.16204. https://arxiv.org/abs/2504.16204
  9. ^ Knoth, Nils; Tolzin, Antonia; Janson, Andreas; Leimeister, Jan Marco (June 1, 2024). "AI literacy and its implications for prompt engineering strategies". Computers and Education: Artificial Intelligence. 6: 100225. doi:10.1016/j.caeai.2024.100225. ISSN 2666-920X.
  10. ^PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts. Association for Computational Linguistics. 2022.
  11. ^ a b c d e f Wei, Jason; Wang, Xuezhi; Schuurmans, Dale; Bosma, Maarten; Ichter, Brian; Xia, Fei; Chi, Ed H.; Le, Quoc V.; Zhou, Denny (October 31, 2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems (NeurIPS 2022). Vol. 35. arXiv:2201.11903.
  12. ^Brubaker, Ben (March 21, 2024)."How Chain-of-Thought Reasoning Helps Neural Networks Compute".Quanta Magazine. RetrievedMay 9, 2025.
  13. ^Chen, Brian X. (June 23, 2023)."How to Turn Your Chatbot Into a Life Coach".The New York Times.
  14. ^Chen, Brian X. (May 25, 2023)."Get the Best From ChatGPT With These Golden Prompts".The New York Times.ISSN 0362-4331. RetrievedAugust 16, 2023.
  15. ^Chen, Zijie; Zhang, Lichao; Weng, Fangsheng; Pan, Lili; Lan, Zhenzhong (June 16, 2024)."Tailored Visions: Enhancing Text-to-Image Generation with Personalized Prompt Rewriting".2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 7727–7736.arXiv:2310.08129.doi:10.1109/cvpr52733.2024.00738.ISBN 979-8-3503-5300-6.
  16. ^Narang, Sharan; Chowdhery, Aakanksha (April 4, 2022)."Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance".ai.googleblog.com.
  17. ^Dang, Ekta (February 8, 2023)."Harnessing the power of GPT-3 in scientific research".VentureBeat. RetrievedMarch 10, 2023.
  18. ^Montti, Roger (May 13, 2022)."Google's Chain of Thought Prompting Can Boost Today's Best Algorithms".Search Engine Journal. RetrievedMarch 10, 2023.
  19. ^"Scaling Instruction-Finetuned Language Models"(PDF).Journal of Machine Learning Research. 2024.
  20. ^Wei, Jason; Tay, Yi (November 29, 2022)."Better Language Models Without Massive Compute".ai.googleblog.com. RetrievedMarch 10, 2023.
  21. ^abKojima, Takeshi; Shixiang Shane Gu; Reid, Machel; Matsuo, Yutaka; Iwasawa, Yusuke (2022). "Large Language Models are Zero-Shot Reasoners".NeurIPS.arXiv:2205.11916.
  22. ^ Wei, Jason; et al. (2022). Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. NeurIPS 2022. arXiv:2201.11903.
  23. ^Garg, Shivam; Tsipras, Dimitris; Liang, Percy; Valiant, Gregory (2022). "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes".NeurIPS.arXiv:2208.01066.
  24. ^Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared D.; Dhariwal, Prafulla; Neelakantan, Arvind (2020). "Language models are few-shot learners".Advances in Neural Information Processing Systems.33:1877–1901.arXiv:2005.14165.
  25. ^abWei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten; Zhou, Denny; Metzler, Donald; Chi, Ed H.; Hashimoto, Tatsunori; Vinyals, Oriol; Liang, Percy; Dean, Jeff; Fedus, William (October 2022). "Emergent Abilities of Large Language Models".Transactions on Machine Learning Research.arXiv:2206.07682.In prompting, a pre-trained language model is given a prompt (e.g. a natural language instruction) of a task and completes the response without any further training or gradient updates to its parameters... The ability to perform a task via few-shot prompting is emergent when a model has random performance until a certain scale, after which performance increases to well-above random
  26. ^Caballero, Ethan; Gupta, Kshitij; Rish, Irina; Krueger, David (2023). "Broken Neural Scaling Laws".ICLR.arXiv:2210.14891.
  27. ^Musser, George."How AI Knows Things No One Told It".Scientific American. RetrievedMay 17, 2023.By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts—an ability known as in-context learning.
  28. ^Garg, Shivam; Tsipras, Dimitris; Liang, Percy; Valiant, Gregory (2022). "What Can Transformers Learn In-Context? A Case Study of Simple Function Classes".NeurIPS.arXiv:2208.01066.Training a model to perform in-context learning can be viewed as an instance of the more general learning-to-learn or meta-learning paradigm
  29. ^Self-Consistency Improves Chain of Thought Reasoning in Language Models. ICLR. 2023.arXiv:2203.11171.
  30. ^abMittal, Aayush (May 27, 2024)."Latest Modern Advances in Prompt Engineering: A Comprehensive Guide".Unite.AI. RetrievedMay 8, 2025.
  31. ^Tree of Thoughts: Deliberate Problem Solving with Large Language Models. NeurIPS. 2023.arXiv:2305.10601.
  32. ^abQuantifying Language Models' Sensitivity to Spurious Features in Prompt Design or: How I learned to start worrying about prompt formatting. ICLR. 2024.arXiv:2310.11324.
  33. ^Leidinger, Alina; van Rooij, Robert; Shutova, Ekaterina (2023). Bouamor, Houda; Pino, Juan; Bali, Kalika (eds.)."The language of prompting: What linguistic properties make a prompt successful?".Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics:9210–9232.arXiv:2311.01967.doi:10.18653/v1/2023.findings-emnlp.618.
  34. ^Linzbach, Stephan; Dimitrov, Dimitar; Kallmeyer, Laura; Evang, Kilian; Jabeen, Hajira; Dietze, Stefan (June 2024)."Dissecting Paraphrases: The Impact of Prompt Syntax and supplementary Information on Knowledge Retrieval from Pretrained Language Models". In Duh, Kevin; Gomez, Helena; Bethard, Steven (eds.).Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Mexico City, Mexico: Association for Computational Linguistics. pp. 3645–3655.arXiv:2404.01992.doi:10.18653/v1/2024.naacl-long.201.
  35. ^Efficient multi-prompt evaluation of LLMs. NeurIPS. 2024.arXiv:2405.17202.
  36. ^"Why Google's AI Overviews gets things wrong".MIT Technology Review. May 31, 2024. RetrievedMarch 7, 2025.
  37. ^"Can a technology called RAG keep AI models from making stuff up?".Ars Technica. June 6, 2024. RetrievedMarch 7, 2025.
  38. ^Larson, Jonathan; Truitt, Steven (February 13, 2024),GraphRAG: Unlocking LLM discovery on narrative private data, Microsoft
  39. ^"An Introduction to Graph RAG".KDnuggets. RetrievedMay 9, 2025.
  40. ^Sequeda, Juan; Allemang, Dean; Jacob, Bryon (2023). "A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases".Grades-Nda.arXiv:2311.07509.
  41. ^Explaining Patterns in Data with Language Models via Interpretable Autoprompting(PDF). BlackboxNLP Workshop. 2023.arXiv:2210.01848.
  42. ^Large Language Models are Human-Level Prompt Engineers. ICLR. 2023.arXiv:2211.01910.
  43. ^Pryzant, Reid; Iter, Dan; Li, Jerry; Lee, Yin Tat; Zhu, Chenguang; Zeng, Michael (2023)."Automatic Prompt Optimization with "Gradient Descent" and Beam Search".Conference on Empirical Methods in Natural Language Processing:7957–7968.arXiv:2305.03495.doi:10.18653/v1/2023.emnlp-main.494.
  44. ^Automatic Chain of Thought Prompting in Large Language Models. ICLR. 2023.arXiv:2210.03493.
  45. ^ Optimizing Instructions and Demonstrations for Multi-Stage Language Model Programs. 2024. arXiv:2406.11695.
  46. ^ GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning. 2025. arXiv:2507.19457.
  47. ^DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines. NeurIPS. 2023.arXiv:2310.03714.
  48. ^"Introducing Opik: Prompt Optimization by Evaluation".comet.com. 2024.
  49. ^Goldman, Sharon (January 5, 2023)."Two years after DALL-E debut, its inventor is "surprised" by impact".VentureBeat. RetrievedMay 9, 2025.
  50. ^ab"Prompts".docs.midjourney.com. RetrievedAugust 14, 2023.
  51. ^"Why Does This Horrifying Woman Keep Appearing in AI-Generated Images?".VICE. September 7, 2022. RetrievedMay 9, 2025.
  52. ^Goldblum, R.; Pillarisetty, R.; Dauphinee, M. J.; Talal, N. (1975)."Acceleration of autoimmunity in NZB/NZW F1 mice by graft-versus-host disease".Clinical and Experimental Immunology.19 (2):377–385.ISSN 0009-9104.PMC 1538084.PMID 2403.
  53. ^"Stable Diffusion prompt: a definitive guide". May 14, 2023. RetrievedAugust 14, 2023.
  54. ^Diab, Mohamad; Herrera, Julian; Chernow, Bob (October 28, 2022)."Stable Diffusion Prompt Book"(PDF). RetrievedAugust 7, 2023.Prompt engineering is the process of structuring words that can be interpreted and understood by atext-to-image model. Think of it as the language you need to speak in order to tell an AI model what to draw.
  55. ^Heikkilä, Melissa (September 16, 2022)."This Artist Is Dominating AI-Generated Art and He's Not Happy About It".MIT Technology Review. RetrievedAugust 14, 2023.
  56. ^Solomon, Tessa (August 28, 2024)."The AI-Powered Ask Dalí and Hello Vincent Installations Raise Uncomfortable Questions about Ventriloquizing the Dead".ARTnews.com. RetrievedJanuary 10, 2025.
  57. ^Gal, Rinon; Alaluf, Yuval; Atzmon, Yuval; Patashnik, Or; Bermano, Amit H.; Chechik, Gal; Cohen-Or, Daniel (2023). "An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion".ICLR.arXiv:2208.01618.Using only 3-5 images of a user-provided concept, like an object or a style, we learn to represent it through new "words" in the embedding space of a frozen text-to-image model.
  58. ^Segment Anything(PDF). ICCV. 2023.
  59. ^Li, Xiang Lisa; Liang, Percy (2021). "Prefix-Tuning: Optimizing Continuous Prompts for Generation".Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). pp. 4582–4597.doi:10.18653/V1/2021.ACL-LONG.353.S2CID 230433941.In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning... Prefix-tuning draws inspiration from prompting
  60. ^Lester, Brian; Al-Rfou, Rami; Constant, Noah (2021). "The Power of Scale for Parameter-Efficient Prompt Tuning".Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. pp. 3045–3059.arXiv:2104.08691.doi:10.18653/V1/2021.EMNLP-MAIN.243.S2CID 233296808.In this work, we explore "prompt tuning," a simple yet effective mechanism for learning "soft prompts"...Unlike the discrete text prompts used by GPT-3, soft prompts are learned through back-propagation
  61. ^How Does In-Context Learning Help Prompt Tuning?. EACL. 2024.arXiv:2302.11521.
  62. ^Shin, Taylor; Razeghi, Yasaman; Logan IV, Robert L.; Wallace, Eric; Singh, Sameer (November 2020)."AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts".Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics. pp. 4222–4235.doi:10.18653/v1/2020.emnlp-main.346.S2CID 226222232.
  63. ^ Meincke, Lennart; Mollick, Ethan R.; Mollick, Lilach; Shapiro, Dan (March 4, 2025). Prompting Science Report 1: Prompt Engineering is Complicated and Contingent. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5165270
  64. ^"'AI is already eating its own': Prompt engineering is quickly going extinct".Fast Company. May 6, 2025.
  65. ^Bousquette, Isabelle (April 25, 2025)."The Hottest AI Job of 2023 Is Already Obsolete".Wall Street Journal.ISSN 0099-9660. RetrievedMay 7, 2025.
  66. ^Vigliarolo, Brandon (September 19, 2022)."GPT-3 'prompt injection' attack causes bot bad manners".The Register. RetrievedFebruary 9, 2023.
  67. ^"What is a prompt injection attack?".IBM. March 26, 2024. RetrievedMarch 7, 2025.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Prompt_engineering&oldid=1318969393"