Regarding AI prompts:
- What personal data is used in AI prompts?
- How do we ensure that prompts only request data necessary for the processing purpose?
- What mechanisms are in place to prevent bias or discrimination in prompts?
- Does the system use sensitive data in its prompts (e.g., health data, ethnic origin)?
- Are regular audits conducted to ensure that prompt-driven outputs do not lead to unfair or biased decisions?
- Are prompts transparent and explainable to end users?
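The data-minimization and sensitive-data questions above can be made concrete in code. The sketch below is illustrative only, not a prescribed control: a simple redaction pass that replaces common personal identifiers with typed placeholders before free text is included in a prompt. The regex patterns and placeholder labels are assumptions; a production system would use a vetted PII-detection library covering many more identifier types.

```python
import re

# Illustrative patterns only -- real deployments need broader coverage
# (names, addresses, national IDs, health terms, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d\s.-]{7,}\d\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with a typed placeholder before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = redact("Summarise the complaint from jane.doe@example.com, "
                "reachable at +33 6 12 34 56 78.")
```

A redaction step like this supports the necessity question (only data needed for the purpose reaches the model) and gives auditors a concrete artifact to review.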
Regarding the AI model:
- Is the AI system classified as high-risk according to Annex III of the EU AI Act?
- Has a Data Protection Impact Assessment (DPIA) been conducted?
- What measures have been implemented to mitigate the risks identified during the impact assessment?
- Does the AI system provide clear explanations for the recommendations or conclusions it generates?
- Are there mechanisms to detect and limit bias in the AI system?
- Is there a monitoring plan to evaluate the system’s performance and compliance after deployment?
- How can data subjects exercise their rights (access, rectification, erasure) regarding data processed by the AI system?
- Is the data sent to the AI model recorded or reused for training purposes?
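The audit, monitoring, and data-recording questions above presuppose some form of logging of what is sent to the model. One possible sketch, offered as an assumption rather than a mandated design: record a hash and metadata for each outbound prompt instead of the raw text, so audits can establish what was sent and for which purpose without the log itself retaining personal data. The model name and purpose strings below are hypothetical placeholders.

```python
import hashlib
import time

def audit_record(prompt: str, model: str, purpose: str) -> dict:
    """Return an audit-log entry for an outbound prompt.

    Stores a SHA-256 digest and length, not the prompt text, so the
    log proves what was sent without duplicating personal data."""
    return {
        "ts": time.time(),
        "model": model,            # hypothetical model identifier
        "purpose": purpose,        # processing purpose, for necessity checks
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),
    }

entry = audit_record("Summarise case 4711", "example-model", "claims-triage")
```

Whether such hashed logs suffice depends on the answers to the retention and training-reuse questions; if the provider reuses submitted data for training, contractual and technical opt-outs need to be verified separately.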