Software Development Life Cycle (SDLC): A value-driven approach

A departmental approach to delivering successful digital solutions.

TL;DR
Our SDLC is a value-driven framework for delivering and managing digital solutions that are fast, safe, and centred on real user needs. It balances strategy, structure, and agility - empowering teams to build meaningful, evolving products through continuous quality, collaboration, and purpose.
Start with WHY

To deliver and manage better digital solutions - faster, safer, and with purpose.

Driven by our values
At its heart, the SDLC is not just about writing code or following process. It’s about solving real problems for real people - through technology that’s responsive, inclusive, and sustainable.
https://www.forgov.qld.gov.au/
1. Customers first: We build with, not just for, our users - placing their needs at the centre of every decision.
2. Ideas in action: We test, learn, and adapt quickly - embracing iteration and innovation to deliver what works.
3. Unleash potential: We empower teams to own their craft, grow their capabilities, and challenge the status quo.
4. Be courageous: We challenge outdated practices and simplify where complexity holds us back.
5. Empower people: We build systems that serve people - our colleagues, stakeholders, and the communities we support.
What is an SDLC?

Before we try to define how to implement an SDLC, we should unpack what it is.
• Software – not just code, but a digital product created to solve real problems.
• Delivery – the act of placing something tangible and desirable into the hands of those who want it, when they need it.
• Life – the digital product is alive. It does not exist in a vacuum. It should evolve, adapt and respond to its environment.
• Cycle – development is not linear but cyclic. It will move and loop through various phases during its existence.

An SDLC can be described as a combination of strategy, framework and methodology.
• SDLC as a strategy: The organisational intent and purpose behind how products are delivered sustainably and successfully.
• SDLC as a framework: The structure that outlines the stages of software development from idea to retirement.
• SDLC as a methodology: The practices used to develop and deliver software solutions - tools and processes like Agile, Waterfall, DevOps, Lean, HCD, SAFe, etc.

“The SDLC is the whole process by which a digital solution is imagined, created, delivered, and evolved - to bring ongoing value to the people who use it.”
Guiding principles
• Solutions must be valuable and usable: We listen to real users and measure success through adoption, not just delivery.
• Speed is nothing without safety: We release small and often, backed by automation, QA, and safe-to-fail practices.
• Change is expected: Systems must be built for adaptability - technically, culturally, and procedurally.
• Quality is everyone’s job: Testing, review, and validation are embedded, everyday activities - not infrequent and gated.
• Collaboration beats handover: Designers, developers, testers, stakeholders and users work together throughout. It’s a team sport, not a relay.
• Simplicity supports flow: Complexity breeds confusion, delay and waste. We reduce unnecessary gates, zones, and clutter. Just enough, just in time.

Pillars of success
• User-centred: We co-design with users and validate ideas early and often.
• Outcome-focused: Success is measured through community value, not feature completion.
• Continuous quality: Quality is integrated from the start, supported by automated testing and shared ownership.
• Fast, safe releases: We deploy often, using strategies like canary, blue/green, and feature flags.
• Smaller is smarter: We break work into manageable chunks for speed, learning, and risk control.
• Sustainable change: Architecture, documentation, and ops are built for evolution - not just the next release.
• Lean governance: Approval processes are right-sized to risk. Readiness is part of the work, not a gate at the end.
• Environments that enable: Deployment pipelines and environments are streamlined for speed, safety, and self-service.
SDLC: A brief history

Before we explore modern SDLC practices, it’s useful to reflect on how far we’ve come and why change is necessary.
1. 1970 – Waterfall model: Inspired by physical engineering, the Waterfall model followed strict, sequential phases with upfront requirements and rigid gates. It gave executives predictability and control - ideal for large, hierarchical environments.
2. 1980 – Iterative thinking: Barry Boehm’s Spiral Model introduced risk-driven loops and early forms of iteration - acknowledging that change is inevitable.
3. 1990 – Lightweight methods: Frameworks like RAD and DSDM emerged to speed up delivery and reduce software bloat. These methods embraced prototyping and user involvement.
4. 2000 – The Agile Manifesto: Seventeen practitioners met in Snowbird, Utah, and redefined delivery - valuing individuals, collaboration, and working software over documentation and process.
5. 2010 – Continuous delivery & DevOps: CI/CD pipelines, feature flags, and cloud-native tooling allowed teams to release frequently and safely. DevOps blurred the line between build, test, and operate - bringing a culture of shared responsibility.
6. 2020 – Human-centred, outcome-driven: Focus shifted from shipping features to delivering lasting value. HCD, lean experimentation, and product thinking became essential - especially in digital public service delivery.

As we reflect on this journey, it’s clear that change isn’t just inevitable, it’s essential. Each evolution of the SDLC has responded to new challenges, technologies, and expectations. To deliver meaningful digital services today, our SDLC must remain flexible, human-centred, and ready to adapt.

Practices over phases
Historically, a ‘step by step’ phased approach has been taken to deliver software. While this suited mainframes, punch cards and burning CD-ROMs, it is no longer fit for purpose in today’s fast-moving environment. Instead, a ‘best practice’ mindset should be adopted that aligns with our goals and values. Practices aren’t linear steps to follow - they’re disciplines we cultivate to deliver and evolve digital solutions with confidence, quality, and care.
Core practices of SDLC

These are not phases to follow, but practices we engage in - intentionally, iteratively, and continuously. Each practice reflects a mindset and discipline that supports sustainable, user-centred delivery. They can occur in parallel, loop back, or fade in and out depending on the context. What matters most is that each practice is actively engaged with, at the right time, for the right reason.
1. Discovering: Understanding people, problems, and possibilities through research and exploration.
2. Defining: Creating clarity of purpose, goals, priorities, and value.
3. Designing: Exploring and validating ideas through prototypes, feedback, and co-design.
4. Developing: Crafting working software through collaboration, code, and configuration.
5. Testing: Ensuring quality, usability, and safety throughout the delivery cycle.
6. Deploying: Delivering working software into the hands of users - early, safely, and often.
7. Operating: Supporting and observing live systems to ensure reliability, performance, and trust.
8. Learning: Measuring impact, gathering insights, and continuously improving outcomes.

Let’s take a detailed look at each of these in turn, and review what methodologies, processes and tools best support each endeavour.
Discovering

This practice includes activities like user research, service mapping, and assumption testing. It’s how we ensure we’re building the right thing, not just building things right. It’s not something that happens once at the start - it’s a mindset and capability that may re-emerge at any point when new information arises.
• Helps avoid building the wrong solution to the right problem
• Uncovers the true needs and pain points of users, not just stakeholder assumptions
• Builds empathy, shared understanding, and alignment
• Supports better prioritisation and reduces waste

Desired outcomes vs anti-patterns
Desired outcomes
• A clear understanding of the problem space
• Evidence of user needs, not just opinions
• Shared vision across team and stakeholders
• A backlog of validated opportunities or ideas
• Reduced risk of building the wrong thing
Anti-patterns to avoid
• Skipping user discovery to “save time”
• Treating discovery as a one-time phase
• Confusing internal opinions with user needs
• Using research to justify pre-decided solutions
• Delegating discovery to one role or silo

When we do it
• At the beginning of a new initiative, feature, or policy shift
• Whenever clarity is lacking, e.g. vague requirements or stakeholder misalignment
• Before committing sizable resources to build
• After deployment, to learn from failures or successes and refine understanding

Tools & techniques
• User research: Interviews, surveys, shadowing, contextual inquiry
• Problem framing: “How might we…” statements and hypothesis framing - best done with real end users
• Service & experience mapping: Customer journey maps, service blueprints
• Co-design & discovery workshops: Stakeholder alignment, brainstorming, empathy mapping
• Assumption testing: Discovery spikes, experiments, paper prototypes
• Personas & scenarios: Synthesised artefacts to guide empathy and design
• Lean UX canvas or Opportunity canvas: Helps connect user needs to business goals

Practice in action
“During a Sprint Review, the team invited a small group of real users to view a rough prototype. Their feedback reframed the entire direction. What we thought was a complex workflow issue turned out to be a trust issue around terminology. We shifted our focus to language, simplified the UI, and solved the actual problem - not the assumed one.”
Defining

Clarifying purpose, priorities, and direction so everyone pulls the same way.
• Turns research insights into actionable direction
• Helps teams focus on what matters most - outcomes, not just output
• Aligns stakeholders, delivery teams, and users around shared goals
• Supports meaningful prioritisation and lean delivery
• Prevents drift, ambiguity, and gold-plating

Desired outcomes vs anti-patterns
Desired outcomes
• A clearly articulated problem or opportunity statement
• Shared understanding of user and business goals
• A prioritised backlog of actionable work
• Agreement on what “good” looks like (definition of done/success)
• Trade-offs and constraints are visible and agreed upon
Anti-patterns
• Too much upfront detail (overplanning or solutioning too early)
• Assumptions treated as facts
• Priorities shaped by the loudest voice, not the greatest value
• Defining done based on output (e.g. “code delivered”), not outcome

When we do it
• After early discovery, once needs are emerging
• At the start of a new initiative, sprint, or roadmap cycle
• When the backlog becomes bloated, scattered, or misaligned
• When team or stakeholder priorities conflict
• When delivery is happening but no one’s sure why

Tools & techniques
• Lean Canvas or Opportunity Canvas
• Problem statements and success criteria
• User story mapping
• MoSCoW or ICE prioritisation (a short scoring sketch follows this section)
• Product vision statements and roadmaps
• Epic breakdown and backlog refinement
• Value mapping and outcomes chains
• Definition of Done (DoD) and Definition of Ready (DoR)
• HCD alignment tools (e.g. value proposition canvas, JTBD)

Practice in action
“Ahead of Sprint Planning, the team used a Lean Canvas to map what they’d learned in discovery. It helped them define their problem space, uncover assumptions, and refine their backlog. A bloated epic was split into three smaller, outcome-aligned stories - with one idea dropped entirely. The team left the session confident, aligned, and focused.”
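To make the ICE prioritisation mentioned above more concrete, here is a minimal sketch of how a team might score backlog candidates (Impact × Confidence × Ease, each rated 1 to 10). The items, ratings and TypeScript helper are illustrative assumptions only - not a mandated tool or real data.

    // ICE scoring sketch: rate each candidate 1-10 for Impact, Confidence and Ease;
    // the product gives a rough priority order to start a conversation, not end one.
    type Candidate = { name: string; impact: number; confidence: number; ease: number };

    const iceScore = (c: Candidate): number => c.impact * c.confidence * c.ease;

    const backlog: Candidate[] = [
      { name: "Simplify application form wording", impact: 8, confidence: 7, ease: 9 },
      { name: "Add SMS progress notifications", impact: 6, confidence: 5, ease: 4 },
      { name: "Rebuild reporting module", impact: 7, confidence: 3, ease: 2 },
    ];

    // Print the highest-scoring candidates first.
    backlog
      .sort((a, b) => iceScore(b) - iceScore(a))
      .forEach(c => console.log(iceScore(c), c.name));

The scores are a prompt for discussion in refinement, not a substitute for judgement about user and community value.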
Designing

Not to be confused with ‘graphic or interface design’, this practice explores and shapes possible solutions through rapid, testable ideas.
• Makes thinking visible - so we can test it, challenge it, and improve it
• Allows us to experiment without the cost of building big or gold-plating too early
• Encourages fast, flexible collaboration across disciplines
• Supports usability, accessibility, and clear communication
• Helps validate direction - then evolve or discard as needed

Desired outcomes vs anti-patterns
Desired outcomes
• Shared understanding of how users might interact with a solution
• Clickable or testable prototypes grounded in real user needs
• Lightweight artefacts that help guide, not dictate, development
• Opportunities for teams and users to shape design direction
• Confident decisions on what to build - or what not to
Anti-patterns
• Designs treated as final specs, not exploratory artefacts
• Overdesigned visuals that assume correctness before testing
• “Big reveal” designs created in isolation
• Treating design as the ownership of a single role or team
• Clinging to unvalidated concepts because they “look done”

When we do it
• After discovery and early definition work
• Before development starts - or in parallel with it
• When we need to explore multiple options or approaches
• As a lightweight way to reduce risk and gather feedback
• When introducing new flows, content, or experiences

Tools & techniques
• Clickable, disposable prototypes (Figma, XD, InVision)
• Interactive mockups that simulate key flows
• Co-design and sketching workshops
• Design sprints and spike activities
• Usability tests with rough prototypes
• Early-adopter, A/B or beta review releases
• Storyboards, task flows, or experience maps
• Wireframes annotated with user goals - not specs
• Content-first or plain-language design
• Accessibility-by-design checklists

Practice in action
“When planning a new onboarding feature, the team ran a short design spike. They mocked up two rough prototypes in Figma and tested both with actual users. One was discarded completely. The other evolved through early-adopter feedback into a working design the devs could easily expand - without needing final artwork or pixel-perfect alignment.”
Developing

Bringing ideas to life through working, testable, maintainable software. Developing is a disciplined, team-driven, iterative mindset focused on delivering working software, not just “writing features”.
• Turns validated concepts into functional digital solutions
• Embraces iteration, pairing, review, and continuous improvement
• Balances speed with sustainability - building for now and later
• Encourages team collaboration, not solo coding
• Makes delivery smooth by integrating testing, documentation, and deployment readiness into the work

Desired outcomes vs anti-patterns
Desired outcomes
• Clean, maintainable, and secure code
• Working software delivered in small, testable increments
• Peer-reviewed, version-controlled work with traceability
• Automated tests integrated into the development workflow
• Shared ownership and visibility across the team
Anti-patterns
• A “code complete” mentality with no thought for integration, testing, or release
• Handover-based dev work (e.g. “the devs will build it, then hand it over to xyz”)
• Over-engineering or building for imagined future needs
• No documentation, no review, no tests
• Isolated developers with unclear context or purpose

When we do it
• As early as possible after defining the scope of a small, testable increment
• In short loops - build, test, integrate, repeat
• When prototypes or validated concepts are ready to evolve
• Continuously, as part of CI/CD and iterative delivery models
• During spikes to explore technical feasibility or patterns
Tools & techniques
• Version control (e.g. Git, Azure DevOps) and branching strategies (e.g. trunk-based, feature branching)
• Pair programming or mob programming
• Pull requests and code review practices
• Test-driven development (TDD) and behaviour-driven development (BDD)
• Static code analysis, linters, and style guides
• Developer-driven documentation and README hygiene
• Feature toggles and integration hooks (see the sketch after this section)
• Dev containers, local dev environments, synthetic data and mock APIs
• DevOps automation and pre-deployment validation tools

Practice in action
“During development of a new feature, the team committed to writing only what could be peer reviewed and tested in a single sprint. They paired up for complex logic, used feature toggles to keep incomplete work out of production, and integrated unit tests into their pull requests. This allowed them to merge confidently, release frequently, and avoid a long, risky, bug-filled backlog. User feedback and validation were near real-time - users loved the collaboration and inclusion!”
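As a small illustration of the feature-toggle and test-first habits described above, here is a hedged sketch. The flag name, function and Jest-style test globals are assumptions for illustration, not a prescribed library or standard.

    // Feature-toggle sketch: unfinished behaviour ships dark behind a flag,
    // and unit tests written alongside the code travel in the same pull request.
    type Flags = Record<string, boolean>;

    export function greetingFor(user: { name: string }, flags: Flags): string {
      // New wording stays hidden until the toggle is switched on.
      return flags["personalised-greeting"] ? `Welcome back, ${user.name}` : "Welcome";
    }

    // Jest/Vitest-style tests (assumed test runner) covering both toggle states.
    describe("greetingFor", () => {
      it("keeps existing behaviour when the flag is off", () => {
        expect(greetingFor({ name: "Sam" }, {})).toBe("Welcome");
      });
      it("uses the new wording only when the flag is on", () => {
        expect(greetingFor({ name: "Sam" }, { "personalised-greeting": true })).toBe("Welcome back, Sam");
      });
    });

Because both toggle states are tested, the change can merge to trunk long before the new experience is switched on for users.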
Testing

Building confidence by embedding quality into everything we do. Testing has evolved from a late-stage gate into a ‘shift-left’, continuous, integrated discipline. It’s no longer just about checking if things work - it’s about building confidence, safety, and quality into every step of delivery.
• Ensures we’re delivering safe, usable, accessible, and reliable software
• Catches issues early - when they’re easier and cheaper to fix
• Supports faster delivery by making quality visible and actionable
• Empowers teams to test continuously - not just at the end
• Builds trust in the product, the process, and the team

Desired outcomes vs anti-patterns
Desired outcomes
• Automated and manual tests aligned to real user scenarios
• Fast, reliable feedback loops built into the CI/CD pipeline
• Accessibility, security, and performance tested as part of delivery
• Shared responsibility for quality - devs, testers, and designers collaborating
• Testing seen as a learning and risk-reduction activity
Anti-patterns
• Testing left to the end of the project or sprint
• Manual test cases with no automation or reuse
• QA isolated from design, development, and delivery
• Tests focused on “does it work” but ignoring “does it make sense”
• Relying on users or production feedback as the main safety net

When we do it
• From day one - starting with testable designs and stories
• During development, using TDD, unit tests, and integration checks
• At the point of merge or deployment, through CI/CD
• Post-release, using synthetic monitoring or user analytics
• Any time we need confidence that what we’re building is fit for purpose
Tools & techniques
• Unit, integration, and end-to-end (E2E) automated testing (see the example below)
• Test-driven development (TDD) or behaviour-driven development (BDD)
• Manual exploratory testing and usability testing
• Accessibility audits (e.g. axe, WAVE)
• Security testing (e.g. dependency scans, penetration tests)
• Performance and load testing tools (e.g. JMeter, k6)
• CI/CD pipelines with automated test stages
• Feature flags for safe testing in production
• Session-based test management
• Testing personas and real-device testing (e.g. BrowserStack, mobile labs)

Practice in action
“Rather than waiting for a QA cycle, the team wrote test cases alongside their stories. Developers created unit and integration tests in parallel with code. Testers joined refinement sessions and used exploratory testing mid-sprint. Bugs were caught early, risks were surfaced fast, and the team released with confidence daily.”
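To illustrate tests aligned to real user scenarios rather than implementation details, here is a minimal sketch. The validation rules, card format and Jest-style syntax are hypothetical, included only to show the style of test we aim for.

    // User-scenario tests for a hypothetical concession-card number validator:
    // the test names describe behaviour a user cares about, in plain language.
    export function validateCardNumber(input: string): { ok: boolean; reason?: string } {
      const trimmed = input.trim();
      if (trimmed === "") return { ok: false, reason: "Card number is required" };
      if (!/^\d{9}$/.test(trimmed)) return { ok: false, reason: "Card number must be 9 digits" };
      return { ok: true };
    }

    describe("Applying with a concession card", () => {
      it("accepts a valid 9-digit number, even with surrounding spaces", () => {
        expect(validateCardNumber(" 123456789 ").ok).toBe(true);
      });
      it("explains the problem in plain language when the number is too short", () => {
        expect(validateCardNumber("1234").reason).toBe("Card number must be 9 digits");
      });
    });

Tests written this way double as living documentation of how the service should behave for the people using it.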
Deploying

Moving working software into the hands of users - safely, confidently, and often. The reality is that many organisations find themselves stuck between rigid, legacy “change control” systems and the need for modern, continuous, safe delivery. The key is to honour the intent of these controls (safety, transparency, accountability, and so on) while reframing how we achieve them - through smaller, safer, automated, observable releases rather than stage gates and big-bang deployments.
• Makes improvements real - delivery isn’t done until it’s in production
• Encourages frequent, low-risk releases instead of high-stakes big bangs
• Uses automated pipelines, feature flags, and progressive rollouts to reduce risk
• Builds trust in the process through transparency, observability, and rollback readiness
• Streamlines approvals while still upholding quality and oversight

Desired outcomes vs anti-patterns
Desired outcomes
• Small, incremental releases that are easy to review, test, and roll back
• Automated deployment pipelines with built-in quality checks
• Environments managed consistently across dev, test, and prod
• Transparent release notes, audit trails, and changelogs
• Approvals that are proportional to risk - not size or ceremony
Anti-patterns
• Delayed deployments due to excessive paperwork and handoffs
• “Production readiness” treated as a static document, not a dynamic state
• Risk increasing with the size and frequency of releases
• Environments that drift from each other or require manual workarounds
• Relying on heroics or after-hours “go-lives” to push changes

When we do it
• Continuously, using automation and triggers (e.g. a CI/CD pipeline)
• After a change is tested and approved via pull request or automated checks
• In response to feedback, user need, or hotfixes
• As a learning tool - observing real usage and outcomes
• Even during discovery (via feature flags, internal betas, etc.)

Tools & techniques
• CI/CD platforms (e.g. Azure DevOps, GitHub Actions, GitLab, Jenkins)
• Beta programs or internal dogfooding environments
• Blue/green deployments, canary releases, and rolling updates
• Feature flags and toggles for controlled rollouts
• Deployment dashboards and logs with real-time visibility
• Auto-generated changelogs and release notes
• Infrastructure as code (IaC) and environment consistency tooling
• Automated PR checks, readiness tickets, or policy gates
• Lightweight “certificate ticket” models replacing heavy one-shot PRC docs

Deployment strategies
Progressive strategies like these are preferable to physical zones, inaccurate MPE builds and inconsistent on-prem/cloud tooling (a small rollout sketch follows the list).
• Feature flags: Toggle features on or off without changing code, allowing “shipping dark”. Use to deploy incomplete or experimental features safely.
• Canary/beta releases: Release to a small percentage of users first, monitor, then roll out further. Use to catch issues early with real usage and rollback control.
• Blue/green deploys: Two mirrored environments; traffic is switched when the new version is ready. Use for near-zero-downtime deployments with an instant rollback option.
• Feature branch deploys: Deploy directly from a feature branch to a non-prod environment with full automation. Use to validate changes in isolation with stakeholder feedback.
• Dark launching: Deploy features fully, but hide them behind conditions or flags. Use to test performance, data flow, or analytics before revealing them.
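To make the feature-flag and canary strategies above tangible, here is a minimal sketch of a percentage-based rollout check. The hashing approach, identifiers and thresholds are illustrative assumptions, not a specific platform’s API.

    // Canary rollout sketch: a stable hash of the user id places each user in a
    // bucket (0-99), so the same user always gets the same experience while only
    // a small cohort sees the new version.
    function bucketOf(userId: string, buckets: number = 100): number {
      let hash = 0;
      for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return hash % buckets;
    }

    export function inCanary(userId: string, rolloutPercent: number): boolean {
      return bucketOf(userId) < rolloutPercent;
    }

    // Start at 5%, watch dashboards and error rates, then raise the percentage.
    console.log(inCanary("user-42", 5) ? "serve new version" : "serve current version");

A managed feature-flag service gives the same behaviour with better auditability; the point here is simply that rollout is a controllable number, not a one-way door.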
Practice in action
“Instead of one large release per quarter, the team moved to weekly deployments using automated pipelines. Each deployment included just a few changes, behind feature flags. Release approvals were based on lightweight, risk-assessed ‘certificate tickets’ tied to automated tests and monitoring. Issues were rare, fast to fix, and entirely visible to stakeholders.”
Operating

Keeping systems reliable, observable, and ready to grow. Operating isn’t where the work ends, but where the product proves itself, grows, and is refined in the real world. It’s where we uphold reliability and actively learn from real use.
• Ensures the product performs under real-world conditions
• Monitors usage, availability, and issues to drive proactive action
• Keeps services safe, compliant, and up to date
• Builds in learning loops and operational visibility - not just support tickets
• Sets the stage for continuous evolution, not quiet abandonment

Desired outcomes vs anti-patterns
Desired outcomes
• Systems are stable, secure, and observable
• Operational metrics - not just help desk tickets - drive product decisions
• Feedback from support, telemetry, and users feeds back into the backlog
• Known issues are tracked, communicated, and improved
• The product roadmap evolves with usage, policy, and performance data
Anti-patterns
• “BAU” seen as a holding pattern - ‘keeping the lights on’
• Reactive-only support with no root cause or trend analysis
• Incidents quietly resolved without learning or visibility
• Metrics that focus only on uptime, not usefulness or quality
• Support and ops disconnected from the product team

When we do it
• Continuously, as part of responsible product stewardship

Tools & techniques
• Observability tooling (e.g. logging, metrics, traces) - a small sketch follows this section
• Real-time monitoring dashboards (e.g. Datadog, Azure Monitor, Prometheus)
• Uptime/performance alerts and SLO tracking
• Synthetic user testing and behavioural analytics
• Help desk ticketing systems and feedback portals
• Ops runbooks, BCP/DR processes and incident response playbooks
• Security patching pipelines and vulnerability scanning
• Product health checks and regular “ops reviews”
• Data-informed backlog grooming (based on usage, not assumption)

Practice in action
“Prior to launch, the team created a dashboard to display user engagement, error rates, and service latency. Shortly after launch, performance dipped on a key form and telemetry flagged the issue within minutes. The team traced it to a browser compatibility problem and issued a patch. User complaints dropped, and the ops team suggested design improvements for the next sprint.”
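As a hedged sketch of the observability mindset behind the tools above, here is a small wrapper that emits structured timing logs a dashboard or alert rule could consume. The metric names and threshold are illustrative assumptions, not a specific monitoring product.

    // Observability sketch: time an operation, emit a structured log event,
    // and flag slow responses so alerting and dashboards have data to work with.
    type MetricEvent = { name: string; durationMs: number; ok: boolean; at: string };

    async function withTiming<T>(name: string, fn: () => Promise<T>): Promise<T> {
      const start = Date.now();
      let ok = true;
      try {
        return await fn();
      } catch (err) {
        ok = false;
        throw err;
      } finally {
        const event: MetricEvent = { name, durationMs: Date.now() - start, ok, at: new Date().toISOString() };
        console.log(JSON.stringify(event)); // shipped to whichever log/metrics platform is in use
        if (event.durationMs > 2000) console.warn(`SLOW: ${name} took ${event.durationMs}ms`);
      }
    }

    // Usage: wrap a (hypothetical) form submission handler.
    withTiming("submit-application-form", async () => { /* handler work goes here */ });

The same structured events can feed SLO tracking and ops reviews, so decisions rest on measured behaviour rather than anecdotes.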
Learning

Making every release, risk, and result another chance to do better. Learning happens throughout the whole digital product lifecycle, though production is often where the most honest learning occurs. Real-world use reveals behaviours, expectations, and edge cases that no prototype can. That’s why we embrace safe experimentation - through POCs, feature flags, and constant micro-releases. Crucially, the SDLC seeks to maintain an open, respectful relationship with our users to ensure their feedback is heard and valued.
• Turns delivery into discovery by measuring what matters
• Creates unfiltered feedback loops between users, operations, support, and delivery
• Encourages safe experimentation and course correction
• Surfaces what’s working, what’s unclear, and what needs to change
• Makes improvement a habit, not a post-mortem

Desired outcomes vs anti-patterns
Desired outcomes
• Product decisions informed by usage, not just assumptions
• Teams run retros, share learnings, and adapt regularly
• User feedback is gathered, visible, and acted on
• Hypotheses and experiments are tracked and reviewed
• Learning loops exist at team, product, and portfolio levels
Anti-patterns
• “One and done” delivery with no iteration
• Lessons captured but never applied
• Feedback collected but siloed or ignored
• Metrics that track vanity or compliance, not value
• Blame culture or fear of exposing uncertainty

When we do it
• After releases, incidents, or delivery cycles
• Continuously, through usage data, monitoring, and analytics
• When trusted user feedback loops highlight pain points or opportunities
• During designing and planning - via sprint reviews, experiments, spikes, and testing

Tools & techniques
• Sprint reviews and retrospectives
• Post-incident reviews (blameless and constructive)
• Product and usage analytics (e.g. GA, Mixpanel, Matomo)
• User feedback forms, surveys, and direct interviews
• Hypothesis tracking and experiment logs (see the sketch below)
• Heatmaps, session replay, and interaction metrics
• Feedback-driven backlog refinement
• Learning reviews across product portfolios or org-wide initiatives
• “You build it, you learn from it” accountability frameworks

Practice in action
“The team launched a new dashboard feature with two layout variants behind a feature flag. They monitored usage, engagement, and support tickets, and found that one variant had better task completion - but with some confusion on mobile. In the following Sprint Review, the Product Owner decided to combine the best elements of both, updated the design, and proposed a new User Story for the following Sprint.”
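Here is a minimal sketch of the experiment pattern in the practice-in-action story above: deterministic variant assignment plus a simple comparison of task-completion rates. The variant names, events and numbers are illustrative assumptions, not real results.

    // Experiment sketch: assign each user a stable layout variant, then compare
    // task-completion rates once analytics events have been collected.
    type CompletionEvent = { variant: "A" | "B"; completed: boolean };

    function assignVariant(userId: string): "A" | "B" {
      let hash = 0;
      for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return hash % 2 === 0 ? "A" : "B";
    }

    function completionRate(events: CompletionEvent[], variant: "A" | "B"): number {
      const group = events.filter(e => e.variant === variant);
      return group.length === 0 ? 0 : group.filter(e => e.completed).length / group.length;
    }

    // Illustrative events only - in practice these come from product analytics.
    const events: CompletionEvent[] = [
      { variant: "A", completed: true }, { variant: "A", completed: false },
      { variant: "B", completed: true }, { variant: "B", completed: true },
    ];
    console.log("user-7 sees variant", assignVariant("user-7"));
    console.log("completion A:", completionRate(events, "A"), "B:", completionRate(events, "B"));

Keeping the hypothesis, assignment rule and results together in an experiment log is what turns a release into something the whole team can learn from.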
SDLC – A practice with purpose

By focusing on practical, value-driven disciplines rather than rigid steps, we create a delivery model that is flexible, sustainable, and human-centred. A well-functioning SDLC helps us:
• Deliver real benefits to users, when they need them
• Build maintainable, cost-effective products that evolve with confidence
• Foster collaboration and shared responsibility across disciplines
• Avoid the burnout, rework, and frustration that often come from inflexible or outdated approaches

Ultimately, an effective SDLC provides clarity of purpose, confidence in delivery, and pride in the products we create: built on values, measured by outcomes.