Understand prompt design

When you make a request to a generative model, you send along a prompt with your request. By carefully crafting these prompts, you can influence the model to generate output specific to your needs.

Prompting for Gemini models

Prompts for Gemini models can contain questions, instructions, contextual information, few-shot examples, and partial input for the model to complete or continue.

Learn about prompt design in the Gemini Developer API documentation.

Tip: You can experiment with prompts and model configurations and rapidly iterate using Google AI Studio.
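As a concrete illustration of the prompt elements listed above, the sketch below assembles an instruction, a few contextual few-shot examples, and a partial input for the model to continue. The helper function, labels, and prompt layout are illustrative assumptions, not an official format; the resulting string would be sent as the `contents` of a Gemini API request.

```python
# Hedged sketch: building a few-shot classification prompt for a Gemini
# model. The function name, example texts, and label set are hypothetical.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, few-shot examples, and a new input
    into a single prompt string, ending with a partial line for the
    model to complete."""
    lines = [instruction, ""]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Sentiment:")  # partial input left for the model to continue
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this movie!", "positive"),
     ("The service was terrible.", "negative")],
    "What a wonderful surprise.",
)
print(prompt)
```

Because the prompt ends mid-pattern ("Sentiment:"), the model is nudged to continue it with a label rather than free-form text.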

Prompting for Imagen models

For Imagen, learn about specific prompting strategies and options.

Other options to control content generation

  • Configure model parameters to control how the model generates a response. For Gemini models, these parameters include max output tokens, temperature, topK, and topP. For Imagen models, these include aspect ratio, person generation, watermarking, etc.
  • Use safety settings to adjust the likelihood of getting responses that may be considered harmful, including hate speech and sexually explicit content.
  • Set system instructions to steer the behavior of the model. This feature is like a preamble that you add before the model gets exposed to any further instructions from the end user.
  • Pass a response schema along with the prompt to specify a specific output schema. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (like when you want the model to use specific labels or tags).
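The options above can be sketched together as plain configuration data. The field names below follow the Gemini API's generation-config parameters mentioned in the list; the system instruction text and the classification schema are hypothetical examples, not values from this page.

```python
# Hedged sketch: generation parameters, a system instruction, and a
# response schema for a classification task. Values are illustrative.

generation_config = {
    "maxOutputTokens": 256,  # cap on response length
    "temperature": 0.4,      # lower = more deterministic sampling
    "topK": 40,              # sample from the 40 most likely tokens
    "topP": 0.95,            # nucleus sampling threshold
}

# System instruction: a preamble applied before any end-user input.
system_instruction = (
    "You are a support-ticket triage assistant. "
    "Respond only with one of the allowed labels."
)

# Response schema constraining output to specific labels (classification).
response_schema = {
    "type": "STRING",
    "enum": ["bug", "feature_request", "question"],
}
```

Passing a schema like this alongside the prompt restricts the model to the listed labels instead of free-form text, which is the classification use case described above.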

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-10-03 UTC.