
Appendix E: Agency reports and evaluations

Agency | Report
Australian Taxation Office (ATO) | Microsoft 365 Copilot trial Update
Commonwealth Scientific and Industrial Research Organisation (CSIRO) | Copilot for Microsoft 365; Data and Insights
Department of Home Affairs (Home Affairs) | Copilot Hackathon
Department of Industry, Science and Resources (DISR) | DISR Internal Mid-trial Survey Insights

Appendix F: Approach and methodology

Executive summary

Preface

The uptake of publicly available generative artificial intelligence (AI) tools, such as ChatGPT, has grown rapidly. In the few years since its public introduction, generative AI has become available and accessible to millions.

This meant the Australian Public Service (APS) had to respond quickly to allow its workforce to experiment with generative AI in a safe, responsible and integrated way. To make this experimentation possible, an appropriate generative AI tool needed to be selected. 

This decision was dependent on:

  • how swiftly and seamlessly the tool could be deployed for rapid APS experimentation purposes
  • the ability for staff to experiment and learn using applications familiar to them.

One solution to enable the APS to experiment with safe and responsible generative AI was Microsoft 365 Copilot (formerly Copilot for Microsoft 365). On 16 November 2023, the Australian Government announced a 6-month whole-of-government trial of Copilot. Copilot is a supplementary product that integrates with the existing applications in the Microsoft 365 suite, and it is nested within existing whole-of-government contracting arrangements with Microsoft. This made it a rapid and familiar solution to deploy.

Broadly, the trial and evaluation tested the extent to which the wider promise of generative AI capabilities would translate into real-world adoption by workers. The results will help the Australian Government consider future opportunities and challenges related to the adoption of generative AI.

This was the first trial of a generative AI tool in the Australian Government. The future brings exciting opportunities to understand what other tools are available to explore a broad landscape of use cases.

Overarching findings

Evaluation findings, approach and methodology

Evaluation findings

Employee-related outcomes

  • 77% were optimistic about Microsoft 365 Copilot at the end of the trial.
  • 1 in 3 used Copilot daily.
  • Over 70% of participants used Microsoft Teams and Word during the trial, mainly for summarising and re-writing content.
  • 75% of participants who received 3 or more forms of training were confident in their ability to use Copilot, 28 percentage points higher than those who received one form of training.

Most trial participants were positive about Copilot and wished to continue using it

  • 86% of trial participants wished to continue to use Copilot.
  • Senior Executive Service (SES) staff (93%) and staff in Corporate roles (81%) had the highest positive sentiment towards Copilot.

Despite the positive sentiment, use of Copilot was moderate

Moderate usage was consistent across classifications and job families but specific use cases varied. For example, a higher proportion of SES and Executive Level (EL) 2 staff used meeting summarisation features, compared to other APS classifications.

Microsoft Teams and Word were used most frequently and met participants’ needs. Poor Excel functionality and access issues in Outlook hampered use.

Content summarisation and re-writing were the most used Copilot functions.

Other generative AI tools may be more effective at meeting users’ needs in reviewing or writing code, generating images or searching research databases.

Tailored training and propagation of high-value use cases could drive adoption

Training significantly enhanced confidence in Copilot use and was most effective when it was tailored to an agency’s context.

Identifying specific use cases for Copilot could lead to greater use of Copilot.

Productivity

  • 69% of survey respondents agreed that Copilot improved the speed at which they could complete tasks.
  • 61% agreed that Copilot improved the quality of their work.
  • 40% of survey respondents reported reallocating their time to:
    • mentoring / culture building
    • strategic planning
    • engaging with stakeholders
    • product enhancement.

Most trial participants believed Copilot improved the speed and quality of their work

Improvements in efficiency and quality were perceived for a small set of tasks, with reported time savings of around an hour a day for these tasks. These tasks included:

  • summarisation
  • preparing a first draft of a document 
  • information searches. 

Copilot had a negligible impact on certain activities such as communication.

APS 3-6 and EL1 classifications and ICT-related roles experienced the highest time savings of around an hour a day on summarisation, preparing a first draft of a document and information searches.

Around 65% of managers observed an uplift in productivity across their team.

Around 40% of trial participants were able to reallocate their time to higher value activities.

Copilot’s inaccuracy reduced the scale of productivity benefits.

Quality gains were more subdued relative to efficiency gains.

Up to 7% of trial participants reported Copilot added time to activities.

Copilot’s potential unpredictability and lack of contextual knowledge meant time had to be spent on output verification and editing, which negated some of the efficiency savings.

Whole-of-government adoption of generative AI

61% of managers in the pulse survey could not confidently identify Copilot outputs.

There is a need for agencies to engage in adaptive planning while ensuring governance structures and processes appropriately reflect their risk appetites.

Adoption of generative AI requires a concerted effort to address key barriers.

Technical

There were integration challenges with non-Microsoft 365 applications, particularly JAWS and Janusseal; however, such integrations were out of scope for the trial. Note: JAWS is a screen reader used to improve the accessibility of digital content for people who are blind or have low vision. Janusseal is a data classification tool used to easily distinguish between sensitive and non-sensitive information.

Copilot may magnify poor data security and information management practices.

Capability

Prompt engineering, identifying relevant use cases and understanding the information requirements of Copilot across Microsoft Office products were significant capability barriers.

Legal

Uncertainty regarding the need to disclose Copilot use, accountability for outputs and a lack of clarity regarding how Freedom of Information requirements apply were barriers to Copilot use, particularly with regard to transcriptions.

Cultural

Negative stigmas and ethical concerns associated with generative AI adversely impacted its adoption.

Governance

Adaptive planning is needed to reflect the rolling release cycle nature of generative AI tools, alongside relevant governance structures aligned to agencies’ risk appetites.

Unintended outcomes

Appendix

Approach and methodology

A mixed-methods approach was adopted for the evaluation.

Over 2,000 trial participants from more than 50 agencies contributed to the evaluation. The final report was written based on document/data review, consultations and surveys.

Document/data review

The evaluation synthesised existing evidence, including:

  • government research papers on Copilot and generative AI
  • the trial issue register
  • 6 agency-led internal evaluations.

Consultations

It also involved thematic analysis through:

  • 24 outreach interviews conducted by the Digital Transformation Agency (DTA)
  • 17 focus groups facilitated by Nous Group
  • 8 interviews facilitated by Nous Group.

Surveys

Analysis was conducted on data collected from:

  • 1,556 respondents to the pre-use survey
  • 1,159 respondents to the pulse survey
  • 831 respondents to the post-use survey.

Appendix

Methodological limitations

Evaluation fatigue may have reduced participation in engagement activities.

Several agencies conducted their own internal evaluations over the course of the trial and did not participate in the DTA’s overall evaluation.

Mitigations: where possible, the evaluation has drawn on agency-specific evaluations to complement findings.

The non-randomised sample of trial participants may not reflect the views of the entire APS.

Participants self-nominated to be involved in the trial, contributing to a degree of selection bias. The representation of APS job families and classifications in the trial differs from the proportions in the overall APS.

Mitigations: the over and underrepresentation of certain groups has been noted. Statistical significance and standard error were calculated, where applicable, to ensure robustness of results.

There was an inconsistent roll out of Copilot across agencies.

Agencies began the trial at different stages, meaning there was not an equal opportunity to build capability or identify use cases. Agencies also used different versions of Copilot due to frequent product releases.

Mitigations: the evaluation distinguishes between what may be a functionality limitation of Copilot and what is a feature disabled by an agency.

Measuring the impact of Copilot relied on trial participants’ self-assessment of productivity benefits.

Trial participants were asked to estimate the scale of Copilot’s benefits, which may naturally under- or overestimate its impact.

Mitigations: where possible, the evaluation has compared productivity findings against other evaluations and external research to verify their validity.

Statistical significance of outcomes

The trial of Copilot for Microsoft 365 involved the distribution of 5,765 Copilot licences across 56 participating agencies. As part of its engagement activities (consultations and surveys), the evaluation gathered the experiences and sentiments of over 2,000 trial participants representing more than 45 agencies. Insights were further strengthened by the findings of internal evaluations completed by certain agencies. The sample size was sufficient to ensure that 95% confidence intervals of reported proportions (at the overall level) were within a margin of error of 5%.
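
As an illustration of the margin-of-error statement above, the sketch below applies the standard normal-approximation formula for a proportion to the three survey sample sizes. It is a minimal sketch only: the worst-case proportion of 0.5, the use of Python and the function name are assumptions for illustration, not part of the evaluation's published method.

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        # Half-width of the 95% normal-approximation confidence interval for a proportion
        return z * math.sqrt(p * (1 - p) / n)

    # Sample sizes from the Surveys list above; p = 0.5 is the worst case
    for label, n in [("pre-use", 1556), ("pulse", 1159), ("post-use", 831)]:
        print(f"{label}: +/- {margin_of_error(n):.1%}")

    # The smallest survey (post-use, n = 831) gives roughly +/- 3.4%, within the 5% margin cited above.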

There were 3 questions asked in the post-use survey that were originally included in either the pre-use or pulse survey. These questions were repeated to compare the responses of trial participants before and after the trial and to measure the change in sentiment. A t-test was used to determine whether changes were statistically significant at a 5% level of significance.
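
The report does not specify the exact form of the t-test, so the sketch below assumes an independent two-sample (Welch's) t-test on numerically coded responses to a repeated question; the 1 to 5 coding, the function name and the example inputs are illustrative assumptions only.

    from scipy import stats

    ALPHA = 0.05  # 5% level of significance, as stated above

    def sentiment_changed(pre_scores, post_scores) -> bool:
        # Welch's two-sample t-test on numerically coded responses (e.g. a 1-5 agreement scale)
        result = stats.ttest_ind(pre_scores, post_scores, equal_var=False)
        return result.pvalue < ALPHA

    # Illustrative response codes only, not evaluation data
    pre = [3, 4, 2, 3, 3, 4, 2, 3, 3, 2]
    post = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4]
    print(sentiment_changed(pre, post))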

The survey aligned with the APS Job Family Framework, and APS job families and classifications were aggregated in survey analysis to reduce standard error and ensure statistical robustness. Post-use survey responses from the Trades and Labour and Monitoring and Audit job families were excluded from family-level reporting because their sample sizes were less than 10, but their responses were still included in aggregate findings.

For APS classifications, APS 3-6 have been aggregated.
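
The sketch below shows one way the aggregation and small-sample exclusion rules described above could be applied to survey data. It is illustrative only: the DataFrame layout, the column names and the abbreviated restatement of the Table A mapping below are assumptions, not the evaluation's actual analysis code.

    import pandas as pd

    # Abbreviated restatement of the Table A mapping; the remaining families map to their groups in the same way
    JOB_FAMILY_GROUPS = {
        "Accounting and Finance": "Corporate",
        "ICT and Digital Solutions": "ICT and Digital Solutions",
        "Policy": "Policy and Program Management",
        "Data and Research": "Technical",
    }

    MIN_CELL_SIZE = 10  # job families below this size are excluded from family-level reporting

    def summarise(responses: pd.DataFrame) -> dict:
        # Assumed columns: "job_family" (string) and "agrees" (1 = agree, 0 = otherwise)
        responses = responses.assign(group=responses["job_family"].map(JOB_FAMILY_GROUPS))
        by_group = responses.groupby("group")["agrees"].mean()  # aggregated groups reduce standard error
        by_family = responses.groupby("job_family")["agrees"].agg(["mean", "count"])
        by_family = by_family[by_family["count"] >= MIN_CELL_SIZE]  # suppress cells with fewer than 10 responses
        overall = responses["agrees"].mean()  # the aggregate figure keeps every response
        return {"by_group": by_group, "by_family": by_family, "overall": overall}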

Survey participation by APS classification and job family

Table A: Aggregation of APS job families for survey analysis
Group | Job families
Corporate | Accounting and Finance; Administration; Communications and Marketing; Human Resources; Information and Knowledge Management; Legal and Parliamentary
ICT and Digital Solutions | ICT and Digital Solutions
Policy and Program Management | Policy; Portfolio, Program and Project Management; Service Delivery
Technical | Compliance and Regulation; Data and Research; Engineering and Technical; Intelligence; Science and Health

 

Table B: Participation in surveys according to APS level classification
Classification | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
SES | 1.9 | 4.7 | 5.3
EL 2 | 9.0 | 20.0 | 20.2
EL 1 | 20.8 | 36.9 | 34.0
APS 6 | 23.4 | 23.4 | 22.3
APS 5 | 14.7 | 8.5 | 9.6
APS 3-4 | 26.0 | 6.0 | 7.4
APS 1-2 | 4.2 | 0.5 | 1.1

 

Table C: Participation in surveys according to job family
Job family | Percentage of all APS employees | Percentage of pre-use survey respondents | Percentage of post-use survey respondents
Accounting and Finance | 5.1 | 5.3 | 3.5
Administration | 11.4 | 9.0 | 8.9
Communication and Marketing | 2.5 | 4.9 | 5.8
Compliance and Regulation | 10.3 | 6.6 | 6.5
Data and Research | 3.7 | 9.9 | 8.3
Engineering and Technical | 1.8 | 1.3 | 1.5
Human Resources | 3.9 | 5.3 | 5.0
ICT and Digital Solutions | 5.0 | 19.6 | 22.3
Information and Knowledge Management | 1.1 | 2.5 | 1.6
Intelligence | 2.4 | 0.9 | 2.1
Legal and Parliamentary | 2.6 | 4.1 | 3.5
Monitoring and Audit | 1.5 | 1.1 | 1.0
Policy | 7.9 | 13.7 | 14.4
Portfolio, Program and Project Management | 8.3 | 8.6 | 7.5
Science and Health | 4.2 | 1.6 | 2.1
Senior Executive | 2.1 | 2.3 | 1.5
Service Delivery | 25.5 | 2.7 | 4.0
Trades and Labour | 0.7 | 0.9 | -

 

Participating agencies

Table D: List of participating agencies by portfolio
Agriculture, Fisheries and Forestry
  • Department of Agriculture, Fisheries and Forestry
  • Grains Research and Development Corporation
  • Regional Investment Corporation
  • Rural Industries Research and Development Corporation (trading as AgriFutures Australia)

Attorney-General’s
  • Australian Criminal Intelligence Commission
  • Australian Federal Police
  • Australian Financial Security Authority
  • Office of the Commonwealth Ombudsman

Climate Change, Energy, the Environment and Water
  • Australian Institute of Marine Science
  • Australian Renewable Energy Agency
  • Department of Climate Change, Energy, the Environment and Water
  • Bureau of Meteorology

Education
  • Australian Research Council
  • Department of Education
  • Tertiary Education Quality and Standards Agency

Employment and Workplace Relations
  • Comcare
  • Department of Employment and Workplace Relations
  • Fair Work Commission

Finance
  • Commonwealth Superannuation Corporation
  • Department of Finance
  • Digital Transformation Agency

Foreign Affairs and Trade
  • Australian Centre for International Agricultural Research
  • Australian Trade and Investment Commission
  • Department of Foreign Affairs and Trade
  • Tourism Australia

Health and Aged Care
  • Australian Digital Health Agency
  • Australian Institute of Health and Welfare
  • Department of Health and Aged Care

Home Affairs
  • Department of Home Affairs (Immigration and Border Protection)

Industry, Science and Resources
  • Australian Building Codes Board
  • Australian Nuclear Science and Technology Organisation
  • Commonwealth Scientific and Industrial Research Organisation
  • Department of Industry, Science and Resources
  • Geoscience Australia
  • IP Australia

Infrastructure, Transport, Regional Development, Communications and the Arts
  • Australian Transport Safety Bureau

Parliamentary Departments (not a portfolio)
  • Department of Parliamentary Services

Social Services
  • Australian Institute of Family Studies
  • National Disability Insurance Agency

Treasury
  • Australian Prudential Regulation Authority
  • Australian Securities and Investments Commission
  • Australian Charities and Not-for-profits Commission
  • Australian Taxation Office
  • Department of the Treasury
  • Productivity Commission
