AI Chat Experience
Help users complete tasks more effectively and efficiently through an AI-powered chat experience. The chat experience can be used to surface relevant insights and accelerate decision-making.
Best Practices
Use these guidelines as a starting point when building AI chat experiences.
Transparency & Clarity
- Clearly indicate that the user is interacting with an AI. This ensures we are following our core principle of transparency.
- Be transparent about the purpose and limitations of the chat. All AI interactions have intentional constraints or limitations, so it's important to clarify the AI's capabilities to the user.
- Avoid overpromising capabilities or scope. Clearly indicate the purpose and limitations of the AI upfront to avoid disappointment and distrust.
User Control & Support
- Allow users the control to correct mistakes or reset the conversation. If the AI misunderstands, offer clear next steps for error recovery.
- Allow users to rate responses or submit feedback to help improve the AI when possible. Different products may have unique technical limitations, and we're still establishing a consistent approach to AI feedback.
- Avoid trapping users in the conversation with no clear escape route or resolution. Always provide clear exit options, such as a way to restart or connect with a human. If the AI doesn't understand, be transparent and offer alternative options.
- Avoid over-reliance on automation when the AI is unable to resolve an issue. Provide a way for users to get human support when the AI is no longer helpful, such as escalating to a human agent or linking to help desk information.
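The escalation guidance above can be sketched as a simple policy. The threshold and step names here are illustrative assumptions, not part of these guidelines:

```typescript
type SupportStep = "retryWithAI" | "offerHumanAgent";

// Hypothetical threshold: after this many unresolved turns, stop retrying
// and surface a human escape hatch instead of looping the user.
const MAX_UNRESOLVED_TURNS = 2;

// Decide whether to keep the AI in the loop or offer human support.
function nextSupportStep(unresolvedTurns: number): SupportStep {
  return unresolvedTurns >= MAX_UNRESOLVED_TURNS ? "offerHumanAgent" : "retryWithAI";
}
```

A product might track `unresolvedTurns` per conversation and render an "Talk to a person" action whenever the policy returns `offerHumanAgent`.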
Conversational Quality & Efficiency
- LLMs (large language models) should use precise, natural language with a conversational tone. Be clear and straightforward, avoid technical jargon, and format long responses to adhere to our Ripple content guidelines.
- Streamline interactions to reduce steps for user efficiency
- Aim for quick response times to meet user expectations. Use the chat loader to inform users of system status while the AI processes.
- Models should leverage their memory of the current conversation to provide relevant responses, avoiding repetition and redundant information requests. This fosters trust in the system and enhances the overall experience.
- Avoid generic error handling with no actions to resolve an issue
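One common way to give the model memory of the current conversation is to pass a window of recent turns with each request. This is a minimal sketch; the window size is a hypothetical value, and real limits depend on the model's context length:

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Hypothetical window size; tune to the model's actual context limit.
const MEMORY_WINDOW = 10;

// Keep the most recent turns so the model can answer follow-ups
// without re-asking for information the user already provided.
function conversationContext(history: ChatMessage[]): ChatMessage[] {
  return history.slice(-MEMORY_WINDOW);
}
```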
User Input
This section defines how users initiate and guide conversations with the AI.
Predefined Prompts
These are predefined selectable options used to guide the chat experience in a straightforward, limited capacity.
Best Practices
- Keep prompts purpose-driven, clear, and concise. Clearly indicate the action or information the AI will provide in a short, scannable way.
- Ensure each prompt is relevant to the current user task or AI response. Prompts should adapt to the conversation as the context evolves.
- Avoid overwhelming the user with too many prompts. Limit prompts to around 3-5 at a time.
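The relevance and quantity guidance above can be sketched as a small selection helper. The shape of `SuggestedPrompt` and its context tags are assumptions for illustration:

```typescript
interface SuggestedPrompt {
  label: string;
  // Hypothetical tags describing where this prompt is relevant.
  contexts: string[];
}

// Cap visible prompts at 5, per the 3-5 guideline above.
const MAX_VISIBLE_PROMPTS = 5;

// Filter to prompts relevant to the current context, then cap the count.
function visiblePrompts(all: SuggestedPrompt[], context: string): SuggestedPrompt[] {
  return all
    .filter((p) => p.contexts.includes(context))
    .slice(0, MAX_VISIBLE_PROMPTS);
}
```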
Custom Prompts
A text input field where users can type questions or commands to prompt responses from the AI.
Best Practices
- Clearly communicate which custom prompts the system does and does not accept. This sets user expectations about the prompts they can submit. Use placeholder text to help users understand how to phrase their prompts.
- Ensure the system communicates any errors. Error messages should clearly explain how to resolve the issue.
- Ensure users can edit their custom prompts after submission and cancel a query while the system is processing. This gives them full control over their requests, letting them refine or halt the interaction as needed.
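A validation step that returns actionable messages is one way to satisfy the error-communication guideline above. The length limit here is a hypothetical product constraint:

```typescript
interface PromptValidation {
  ok: boolean;
  // When ok is false, a message explaining how to resolve the issue.
  message?: string;
}

// Hypothetical limit; actual constraints depend on the product.
const MAX_PROMPT_LENGTH = 500;

function validatePrompt(input: string): PromptValidation {
  const trimmed = input.trim();
  if (trimmed.length === 0) {
    return { ok: false, message: "Enter a question or command before sending." };
  }
  if (trimmed.length > MAX_PROMPT_LENGTH) {
    return {
      ok: false,
      message: `Shorten your prompt to ${MAX_PROMPT_LENGTH} characters or fewer.`,
    };
  }
  return { ok: true };
}
```

Note that each failure message tells the user what to do next, rather than only stating that something is invalid.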
AI Output
This section covers the different ways the AI communicates back to the user.
Text Responses
Standard conversational text output from the LLM, providing information or direct answers without requiring further immediate user interaction beyond reading.
Best Practices
- Ensure responses are formatted appropriately for readability. Break long responses into paragraphs or organize content into lists.
- Stream the text as it loads to signal that it's being generated by an LLM
- Allow the text to be copied by including a quick copy action. This enhances the user's efficiency.
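The streaming guideline above can be modeled as a generator that yields the accumulated text after each chunk, so the UI renders the response progressively instead of waiting for the full answer. This is a sketch; real chunk boundaries come from the model API:

```typescript
// Yield the accumulated text after each chunk so the UI can
// re-render the message as it grows.
function* streamText(chunks: string[]): Generator<string> {
  let shown = "";
  for (const chunk of chunks) {
    shown += chunk;
    yield shown;
  }
}
```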
Interactive Prompts
AI responses that include interactive elements designed to guide the user's next immediate input or action within the chat conversation. This includes clickable buttons for predefined choices, inline forms, dropdowns, or other widgets that collect further information from the user or trigger a subsequent chat-based action.
Best Practices
- Clearly define each action to convey its purpose and establish what response to expect. Also ensure each action is directly relevant to the previous prompt.
- Ensure that users have distinct options for submitting or canceling actions, allowing them to maintain control over their choices
- Avoid overwhelming users with too many choices. Use progressive disclosure if there are too many options at once.
- Do not keep choices active after they have been used or become irrelevant. Dynamically update or remove interactive prompts once they've served their purpose to prevent confusion.
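Deactivating a choice group after use can be sketched as a pure state update. The `ChatChoice` shape is an assumption for illustration:

```typescript
interface ChatChoice {
  id: string;
  label: string;
  active: boolean;
}

// Once any choice in a group is used, deactivate the whole group so
// stale buttons cannot trigger out-of-context actions later.
function consumeChoice(group: ChatChoice[], chosenId: string): ChatChoice[] {
  const exists = group.some((c) => c.id === chosenId);
  if (!exists) return group;
  return group.map((c) => ({ ...c, active: false }));
}
```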
Generated Assets
AI responses that provide the user with a direct link or button to download a file (e.g., a report, document, image, spreadsheet) that the AI has generated or retrieved.
Best Practices
- Include file information such as type and size
- Communicate system status, including loading states while the asset is being generated or downloaded, confirmation of generation or download, and error states if something went wrong. Use the AI loader while the asset is being generated or downloaded.
- Include a clear download action
- Communicate to the user if the asset can or cannot be accessed later
- Do not automatically download the file without user action
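The status and file-information guidelines above can be combined into a single label helper for the download message. The `GeneratedAsset` shape and wording are assumptions, not a prescribed API:

```typescript
type AssetStatus = "generating" | "ready" | "error";

interface GeneratedAsset {
  name: string;
  fileType: string; // e.g. "csv"
  sizeBytes?: number;
  status: AssetStatus;
}

// Surface type, size, and system status alongside the download action.
function assetLabel(asset: GeneratedAsset): string {
  if (asset.status === "generating") return `Generating ${asset.name}...`;
  if (asset.status === "error") return `Could not generate ${asset.name}. Try again.`;
  const size =
    asset.sizeBytes != null ? ` (${Math.round(asset.sizeBytes / 1024)} KB)` : "";
  return `${asset.name}.${asset.fileType}${size}: ready to download`;
}
```

The "ready" label would sit next to an explicit download button, since the file should never download without user action.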