The Team Report leverages a Large Language Model (LLM) to provide insights and recommendations.
Below are the frequently asked questions we have received from customers about how we treat your data and how we use Large Language Models.
Last Updated: 1 December 2025
What is the AI & LLM Functionality in the Easy Agile apps?
Our Easy Agile apps turn organisational Jira data into anonymised metadata (essentially statistics around the flow of work in a team or group of teams) and then pass that metadata to an LLM with a fixed prompt. The LLM analyses the metadata and returns a response that highlights insights and recommendations based on what it sees in the metadata.
The LLM knows what to look for as we have crafted prompts that are focused on agile and lean principles at both the team and group level. We test and review input/output pairings to gain confidence in the insights highlighted and the recommendations made.
How do the Easy Agile apps use organisational data for the LLM based functionality?
All organisational data remains the property of the customer.
No organisational data is sent to the LLM, only the anonymised metadata derived from it, and as such organisational data is not used in training the LLM.
Easy Agile apps use Jira data in the user's browser to calculate metadata, i.e. metrics about the data. For example, we calculate the aggregate time that all selected work items spent 'In Progress' and then average it, giving an average cycle time across the period, which the LLM then analyses.
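As an illustration of the kind of calculation described above, the sketch below computes an average cycle time in the browser so that only an aggregate statistic, never the work items themselves, would be sent onward. The field names and function are hypothetical and do not reflect Easy Agile's actual schema or code.

```typescript
// Hypothetical sketch: derive an aggregate statistic from work items.
// Field names are illustrative, not Easy Agile's actual data model.
interface WorkItem {
  inProgressAt?: Date; // when the item entered 'In Progress'
  doneAt?: Date;       // when the item left 'In Progress'
}

const MS_PER_DAY = 86_400_000;

// Average time (in days) the selected work items spent 'In Progress'.
// Items missing either timestamp are excluded from the average.
function averageCycleTimeDays(items: WorkItem[]): number | null {
  const durations = items
    .filter((i) => i.inProgressAt && i.doneAt)
    .map((i) => (i.doneAt!.getTime() - i.inProgressAt!.getTime()) / MS_PER_DAY);
  if (durations.length === 0) return null;
  return durations.reduce((a, b) => a + b, 0) / durations.length;
}

// Example: two items with cycle times of 4 and 2 days average out to 3 days.
const sample: WorkItem[] = [
  { inProgressAt: new Date("2025-01-01"), doneAt: new Date("2025-01-05") },
  { inProgressAt: new Date("2025-01-02"), doneAt: new Date("2025-01-04") },
];
console.log(averageCycleTimeDays(sample)); // 3
```

Only the resulting number (here, an average cycle time of 3 days) would form part of the metadata; the individual issues and their contents stay in the browser.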
Which LLMs are used in the processing of this data?
All LLM processing occurs within our AWS accounts, using models provided by AWS Bedrock.
We use Anthropic’s Claude Sonnet 4 model. This is subject to change as new model versions are released and as we test and improve our prompts.
What guardrails exist to protect against indirect prompt injection attacks or the surfacing of harmful or malicious content?
There is no direct user interaction with the models as all prompts are written and maintained on our backend systems. Our prompts are written to focus on the areas of agile practices, lean principles, scaled agile and program management.
Is customer data used for training LLM models, including suppliers’ and fourth parties’ models?
No, none of the material we provide to AWS Bedrock is used for training purposes. From the AWS Bedrock ‘Data protection’ documentation:
Amazon Bedrock doesn't store or log your prompts and completions. Amazon Bedrock doesn't use your prompts and completions to train any AWS models and doesn't distribute them to third parties.
Is user interaction with LLM / AI stored by processors involved, and for how long?
There is no user interaction with the LLM: organisational metadata is generated, sent to the LLM along with our prompt, and the response is returned to the user.
From the AWS Bedrock ‘Data protection’ documentation:
Amazon Bedrock doesn't store or log your prompts and completions. Amazon Bedrock doesn't use your prompts and completions to train any AWS models and doesn't distribute them to third parties.
How is data protected when stored?
For observability purposes we retain the input and output of each LLM call so that we can evaluate the responses for accuracy. These are stored indefinitely.
Data is encrypted at rest and handled in accordance with our security practices.
Is the data anonymised?
Yes. No personally identifiable information (PII) is sent to the LLM as part of any request.
Does the LLM connect to the internet for generating responses?
No, all LLM processing takes place within AWS Bedrock, and remains within our existing SOC 2 scope.
Are user interactions with these AI / LLM based components logged and available for audit?
There are no direct user interactions with the AI or LLM.