
AI in government: Why smart risk-taking beats playing it safe
About half of federal agency employees are being trained in AI, cloud, and open-source technologies, according to our recent Federal Software Reimagined report. But that learning shouldn’t happen in a vacuum. Employees need opportunities to incorporate their new skills into their day-to-day work.
Building a “safe-to-fail” culture is an effective way to encourage this kind of trial and error. A safe-to-fail culture assumes that experimentation with any new technology carries risk. To mitigate that risk, leaders take steps to create a controlled environment where failures are teaching moments that will not lead to significant consequences.
When employees can experiment with technology without fear, they become more comfortable using it, leading to better rates of adoption. They may also discover new use cases for technology that can drive efficiencies in the agency’s mission-critical work.
But a safe-to-fail culture requires buy-in from an agency’s top leaders. They must provide the time and space for employees to learn and experiment. Leaders should reward employees who try out new applications, even when those experiments don’t pan out in the end.
All experimentation involves risk—but with AI, cloud, and open-source technologies changing so rapidly, not taking risks in software development is a risk in itself.