AI Transparency Statement
In accordance with the Digital Transformation Agency’s (DTA) Policy for the responsible use of AI in government, this page presents the Australian Institute of Family Studies’ statement on AI transparency.
We see significant potential benefits from using AI to improve the analysis and communication of our research and in improving workplace productivity. We commit to using AI in a safe and responsible manner.
When discussing AI, we apply the Organisation for Economic Co-operation and Development (OECD) definition:
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
Liz Neville, Director
AI use
We may use AI in the corporate and enabling domain and the scientific domain, under the analytics for insights and workplace productivity usage patterns.
Analytics for insights
As a research agency, we see the potential benefits in using AI to assist with analysing both primary and secondary data including:
- generating and debugging code used in data analysis, management and processing
- grouping data and performing thematic analysis
- interrogating, analysing and obtaining insights from quantitative data
- performing sentiment analysis across qualitative data
- summarising data across multiple sources.
Workplace productivity
We see the potential benefits in using AI to improve workplace productivity for all staff including:
- helping answer questions from staff regarding workplace policies and entitlements
- improving accessibility to help all staff use platforms, applications and services
- improving the uptake of features in existing products and services
- summarising documents, emails and other content
- performing transcription of interviews and meeting notes
- preparing training material for new and existing staff.
Public interaction and impact
We do not propose to use AI where the public may directly interact with or be significantly impacted by it.
Monitoring AI effectiveness and negative impacts
Executive monitoring
Our Executive Leadership Team have identified the appropriate use of AI as an emerging risk and are actively involved in reviewing potential use cases for AI services.
Responsible AI usage policy
We have developed and continue to maintain an internal Responsible AI usage policy that aligns with advice and guidance provided by the DTA and other agencies for using AI services responsibly.
The policy was first released in October 2023 and applies to all staff, consultants and contractors. It requires AI services to be evaluated against specific criteria and approved by our Information Management & Technology (IMT) team before use, and requires all users of AI to review and validate any content generated by those services.
Training and assistance
All staff have access to training on the appropriate use of AI services and are encouraged to report concerns to the IMT team. The IMT team can provide advice and assistance to staff members when needed.
Compliance
We will only utilise AI services in accordance with applicable legislation, regulations, frameworks and policies.
Policy for the responsible use of AI in government
We comply with all mandatory requirements of the policy.
Accountable official
The Chief Information Officer was designated as the accountable official on 20 August 2024.
AI transparency statement
The AI transparency statement was first published to our website on 2 December 2024.
AI contact
For questions about this statement or for further information on the Institute’s usage of AI, please contact [email protected].
Change history
| Date | Note |
| --- | --- |
| 2 December 2024 | Initial release. |