Software Development Life Cycle (SDLC): A Value-Driven Approach
TL;DR
Our SDLC is a value-driven framework for delivering and managing digital solutions that are fast, safe, and centered on real user needs. It balances strategy, structure, and agility - empowering teams to build meaningful, evolving products through continuous quality, collaboration, and purpose.
Start with WHY
To deliver and manage better digital solutions - faster, safer, and with purpose.
Driven by our values
At its heart, an SDLC is not just about writing code or following process. It’s about solving real problems for real people - through technology that’s responsive, inclusive, and sustainable.
- Customers first: We build with, not just for, our users - placing their needs at the centre of every decision.
- Ideas in action: We test, learn, and adapt quickly - embracing iteration and innovation to deliver what works.
- Unleash potential: We empower teams to own their craft, grow their capabilities, and challenge the status quo.
- Be courageous: We challenge outdated practices and simplify where complexity holds us back.
- Empower people: We build systems that serve people - our colleagues, stakeholders, and the communities we support.
What is an SDLC?
Before we define how to implement an SDLC, we should unpack what it is.
- Software: Not just code, but a digital product created to solve real problems.
- Development: The act of creating something tangible and desirable and placing it into the hands of those who want it, when they need it.
- Life: The digital product is alive. It does not exist in a vacuum. It should evolve, adapt and respond to its environment.
- Cycle: Development is not linear but cyclic. It will move and loop through various phases during its existence.
SDLC can be described as a combination of strategy, framework, and methodology.
- SDLC as a strategy: The organisational intent and purpose behind how products are delivered sustainably and successfully.
- SDLC as a framework: The structure that outlines the stages of software development from idea to retirement.
- SDLC as a methodology: The practices used to develop and deliver software solutions - tools and processes like Agile, Waterfall, DevOps, Lean, HCD, SAFe, etc.
“The SDLC is the whole process by which a digital solution is imagined, created, delivered, and evolved - to bring ongoing value to the people who use it.”
Guiding Principles
- Solutions must be valuable and usable: We listen to real users and measure success through adoption, not just delivery.
- Speed is nothing without safety: We release small and often, backed by automation, QA, and safe-to-fail practices.
- Change is expected: Systems must be built for adaptability - technically, culturally, and procedurally.
- Quality is everyone’s job: Testing, review, and validation are embedded, everyday activities - not infrequent, gated events.
- Collaboration beats handover: Designers, developers, testers, stakeholders, and users work together throughout. It’s a team sport, not a relay.
- Simplicity supports flow: Complexity breeds confusion, delay, and waste. We reduce unnecessary gates, zones, and clutter. Just enough, just in time.
Pillars of Success
- User-centred: We co-design with users and validate ideas early and often.
- Outcome-focused: Success is measured through community value, not feature completion.
- Continuous quality: Quality is integrated from the start, supported by automated testing and shared ownership.
- Fast, safe releases: We deploy often, using strategies like canary releases, blue/green deployments, and feature flags (see the sketch after this list).
- Smaller is smarter: We break work into manageable chunks for speed, learning, and risk control.
- Sustainable change: Architecture, documentation, and ops are built for evolution - not just the next release.
- Lean governance: Approval processes are right-sized to risk. Readiness is part of the work, not a gate at the end.
- Environments that enable: Deployment pipelines and environments are streamlined for speed, safety, and self-service.
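To make the “fast, safe releases” pillar concrete, here is a minimal sketch of a feature-flag gate. It assumes a hypothetical flags.json config file; real teams would typically use a flag service, but the principle is the same - unknown flags default to off, so unfinished work stays dark in production.

```python
# Minimal feature-flag gate - a sketch, not a production flag service.
# Assumes a hypothetical flags.json such as: {"new-onboarding-flow": true}
import json

def load_flags(path: str = "flags.json") -> dict:
    """Read the flag configuration; a missing file means all flags are off."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def is_enabled(flags: dict, name: str) -> bool:
    """Unknown flags default to off, so incomplete work stays dark."""
    return bool(flags.get(name, False))

def render_onboarding(flags: dict) -> str:
    if is_enabled(flags, "new-onboarding-flow"):
        return "new onboarding flow"   # releasable, but dark until flipped
    return "existing onboarding flow"  # safe fallback

print(render_onboarding(load_flags()))
```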
SDLC: A Brief History
Before we explore modern SDLC practices, it’s useful to reflect on how far we’ve come and why change is necessary.
- 1970s – Waterfall model: Inspired by physical engineering, the Waterfall model followed strict, sequential phases with upfront requirements and rigid gates. It gave executives predictability and control - ideal for large, hierarchical environments.
- 1980s – Iterative thinking: Barry Boehm’s Spiral Model introduced risk-driven loops and early forms of iteration - acknowledging that change is inevitable.
- 1990s – Lightweight methods: Frameworks like RAD and DSDM emerged to speed up delivery and reduce software bloat. These methods embraced prototyping and user involvement.
- 2000s – The Agile Manifesto: Seventeen practitioners met in Snowbird, Utah, in 2001 and redefined delivery - valuing individuals, collaboration, and working software over documentation and process.
- 2010s – Continuous delivery & DevOps: CI/CD pipelines, feature flags, and cloud-native tooling allowed teams to release frequently and safely. DevOps blurred the line between build, test, and operate - bringing a culture of shared responsibility.
- 2020s – Human-centred, outcome-driven: Focus shifted from shipping features to delivering lasting value. HCD, lean experimentation, and product thinking became essential - especially in digital public service delivery.
As we reflect on this journey, it’s clear that change isn’t just inevitable, it’s essential. Each evolution of the SDLC has responded to new challenges, technologies, and expectations. To deliver meaningful digital services today, our SDLC must remain flexible, human-centred, and ready to adapt.
Practices Over Phases
Historically, software was delivered through a ‘step by step’ phased approach. While this suited mainframes, punch cards, and burning CD-ROMs, it is no longer fit for purpose in today’s fast-moving environment. Instead, we adopt a ‘best practice’ mindset that aligns with our goals and values.
Practices aren’t linear steps to follow - they’re disciplines we cultivate to deliver and evolve digital solutions with confidence, quality, and care.
Core Practices of SDLC
These are not phases to follow, but practices we engage in - intentionally, iteratively, and continuously. Each practice reflects a mindset and discipline that supports sustainable, user-centred delivery. They can occur in parallel, loop back, or fade in and out depending on the context.
- Discovering: Understanding people, problems, and possibilities through research and exploration.
- Defining: Creating clarity of purpose, goals, priorities, and value.
- Designing: Exploring and validating ideas through prototypes, feedback, and co-design.
- Developing: Crafting working software through collaboration, code, and configuration.
- Testing: Ensuring quality, usability, and safety throughout the delivery cycle.
- Deploying: Delivering working software into the hands of users - early, safely, and often.
- Operating: Supporting and observing live systems to ensure reliability, performance, and trust.
- Learning: Measuring impact, gathering insights, and continuously improving outcomes.
Let’s take a detailed look at each of these in turn, and review what methodologies, processes, and tools best support each endeavour.
Discovering
This practice includes activities like user research, service mapping, and assumption testing. It’s how we ensure we’re building the right thing, not just building things right.
Desired outcomes vs anti-patterns
- Desired outcomes:
- A clear understanding of the problem space
- Evidence of user needs, not just opinions
- Shared vision across team and stakeholders
- A backlog of validated opportunities or ideas
- Reduced risk of building the wrong thing
- Anti-patterns to avoid:
- Skipping user discovery to “save time”
- Treating discovery as a one-time phase
- Confusing internal opinions with user needs
- Using research to justify pre-decided solutions
- Delegating discovery to one role or silo
When we do it:
- At the beginning of a new initiative, feature, or policy shift
- Whenever clarity is lacking, e.g., vague requirements or stakeholder misalignment
- Before committing sizable resources to build
- After deployment, to learn from failures or successes and refine understanding
Tools & techniques:
- User research: Interviews, surveys, shadowing, contextual inquiry
- Problem framing: “How might we…” statements, hypothesis framing - best with real end users
- Service & experience mapping: Customer journey maps, service blueprints
- Co-design & discovery workshops: Stakeholder alignment, brainstorming, empathy mapping
- Assumption testing: Discovery spikes, experiments, paper prototypes
- Personas & scenarios: Synthesised artefacts to guide empathy and design
- Lean UX canvas or Opportunity canvas: Helps connect user needs to business goals
Practice in action:
“During a Sprint Review, the team invited a small group of real users to view a rough prototype. Their feedback reframed the entire direction. What we thought was a complex workflow issue turned out to be a trust issue around terminology. We shifted our focus to language, simplified the UI, and solved the actual problem - not the assumed one.”
Defining
Clarifying purpose, priorities, and direction so everyone pulls in the same direction.
Desired outcomes vs anti-patterns
- Desired outcomes:
- A clearly articulated problem or opportunity statement
- Shared understanding of user and business goals
- A prioritised backlog of actionable work
- Agreement on what “good” looks like (definition of done/success)
- Trade-offs and constraints are visible and agreed upon
- Anti-patterns:
- Too much upfront detail (overplanning or solutioning too early)
- Assumptions treated as facts
- Priorities shaped by the loudest voice, not greatest value
- Defining done based on output (e.g., “code delivered”) not outcome
When we do it:
- After early discovery, once needs are emerging
- At the start of a new initiative, sprint, or roadmap cycle
- When the backlog becomes bloated, scattered, or misaligned
- When team or stakeholder priorities conflict
- When delivery is happening but no one’s sure why
Tools & techniques:
- Lean Canvas or Opportunity Canvas
- Problem statements and success criteria
- User story mapping
- MoSCoW or ICE prioritisation (see the sketch after this list)
- Product vision statements and roadmaps
- Epic breakdown and backlog refinement
- Value mapping and outcomes chains
- Definition of Done (DoD) and Definition of Ready (DoR)
- HCD alignment tools (e.g., value proposition canvas, jobs-to-be-done)
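As a worked example of the ICE technique above, here is a minimal sketch of scoring a backlog. The items and 1-10 scores are illustrative assumptions, not real data; the score is simply a starting point for a prioritisation conversation.

```python
# ICE prioritisation sketch: score = impact x confidence x ease (1-10 each).
# The backlog items and scores below are illustrative only.
backlog = [
    {"item": "Simplify sign-up form", "impact": 8, "confidence": 7, "ease": 9},
    {"item": "Rebuild reporting module", "impact": 9, "confidence": 4, "ease": 2},
    {"item": "Plain-language error messages", "impact": 6, "confidence": 8, "ease": 8},
]

for entry in backlog:
    entry["ice"] = entry["impact"] * entry["confidence"] * entry["ease"]

# Highest ICE score first: a conversation starter, not a verdict.
for entry in sorted(backlog, key=lambda e: e["ice"], reverse=True):
    print(f"{entry['ice']:>4}  {entry['item']}")
```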
Practice in action:
“Ahead of Sprint Planning, the team used a Lean Canvas to map what they’d learned in discovery. It helped them define their problem space, uncover assumptions, and refine their backlog. A bloated epic was split into three smaller, outcome-aligned stories - with one idea dropped entirely. The team left the session confident, aligned, and focused.”
Designing
Not to be confused with ‘graphic or interface design’, this practice is intended to explore and shape possible solutions through rapid, testable ideas.
Desired outcomes vs anti-patterns
- Desired outcomes:
- Shared understanding of how users might interact with a solution
- Clickable or testable prototypes grounded in real user needs
- Lightweight artefacts that help guide, not dictate, development
- Opportunities for teams and users to shape design direction
- Confident decisions on what to build - or what not to
- Anti-patterns:
- Designs treated as final specs, not exploratory artefacts
- Overdesigned visuals that assume correctness before testing
- “Big reveal” designs created in isolation
- Treating design as ownership of a single role or team
- Clinging to unvalidated concepts because they “look done”
When we do it:
- After discovery and early definition work
- Before development starts - or in parallel with it
- When we need to explore multiple options or approaches
- As a lightweight way to reduce risk and gather feedback
- When introducing new flows, content, or experiences
Tools & techniques:
- Clickable, disposable prototypes (Figma, XD, InVision)
- Interactive mockups that simulate key flows
- Co-design and sketching workshops
- Design sprints and spike activities
- Usability tests with rough prototypes
- Early-adopter, A/B, or beta review releases (see the sketch after this list)
- Storyboards, task flows, or experience maps
- Wireframes annotated with user goals - not specs
- Content-first or plain language design
- Accessibility-by-design checklists
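For the A/B releases mentioned above, here is a minimal sketch of deterministic variant assignment. The experiment name and the 50/50 split are assumptions for illustration; hashing the experiment and user together keeps assignments stable for each user and independent across experiments.

```python
# Deterministic A/B assignment sketch: the same user always gets the
# same variant, and different experiments bucket independently.
import hashlib

def variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Map (experiment, user) to a stable bucket in [0, 1)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32
    return "A" if bucket < split else "B"

# Illustrative usage: stable across calls and across deployments.
print(variant("user-42", "onboarding-copy-test"))
```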
Practice in action:
“When planning a new onboarding feature, the team ran a short design spike. They mocked up two rough prototypes in Figma and tested both with actual users. One was discarded completely. The other evolved through early-adopter feedback into a working design the devs could easily expand - without needing final artwork or pixel-perfect alignment.”
Developing
Bringing ideas to life through working, testable, maintainable software. Developing is a disciplined, team-driven, iterative practice focused on delivering working software, not just “writing features”.
Desired outcomes vs anti-patterns
- Desired outcomes:
- Clean, maintainable, and secure code
- Working software delivered in small, testable increments
- Peer-reviewed, version-controlled work with traceability
- Automated tests integrated into the development workflow
- Shared ownership and visibility across the team
- Anti-patterns:
- “Code complete” mentality with no thought for integration, testing, or release
- Handover-based dev work (e.g., “the devs will build it, then hand it over to xyz”)
- Over-engineering or building for imagined future needs
- No documentation, no review, no tests
- Isolated developers with unclear context or purpose
When we do it:
- As early as possible after defining the scope of a small, testable increment
- In short loops - build, test, integrate, repeat
- When prototypes or validated concepts are ready to evolve
- Continuously, as part of CI/CD and iterative delivery models
- During spikes to explore technical feasibility or patterns
Tools & techniques:
- Version control (e.g., Git, Azure DevOps) and branching strategies (e.g., trunk-based, feature branching)
- Pair programming or mob programming
- Pull requests and code review practices
- Test-driven development (TDD) and behaviour-driven development (BDD)
- Static code analysis, linters, and style guides
- Developer-driven documentation and README hygiene
- Feature toggles and integration hooks
- Dev containers, local dev environments, synthetic data, and mock APIs (see the sketch after this list)
- DevOps automation and pre-deployment validation tools
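As an example of the synthetic data and mock APIs item above, here is a minimal sketch of a mock client that stands in for a real service during local development. The Customer shape, the client interface, and the records are all hypothetical.

```python
# Mock API sketch: a stand-in for a real HTTP client, returning
# synthetic records so developers can work without live dependencies.
# The Customer shape and the records below are hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:
    id: str
    name: str

class MockCustomerClient:
    """Exposes the same interface a real client would."""
    _synthetic = {
        "c-001": Customer("c-001", "Test Person One"),
        "c-002": Customer("c-002", "Test Person Two"),
    }

    def get_customer(self, customer_id: str) -> Customer:
        try:
            return self._synthetic[customer_id]
        except KeyError:
            raise LookupError(f"no synthetic record for {customer_id}")

client = MockCustomerClient()
print(client.get_customer("c-001").name)  # -> Test Person One
```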
Practice in action:
“During development of a new feature, the team committed to writing only what could be peer-reviewed and tested in a single sprint. They paired up for complex logic, used feature toggles to keep incomplete work out of production, and integrated unit tests into their pull requests. This allowed them to merge confidently, release frequently, and avoid a long, risky, bug-filled backlog. User feedback and validation were near real-time - users loved the collaboration and inclusion!”
Testing
Building confidence by embedding quality into everything we do. Testing has evolved from a late-stage gate into a continuous, integrated, ‘shift-left’ discipline. It’s no longer just about checking whether things work - it’s about building confidence, safety, and quality into every step of delivery.
Desired outcomes vs anti-patterns
- Desired outcomes:
- Automated and manual tests aligned to real user scenarios
- Fast, reliable feedback loops built into the CI/CD pipeline
- Accessibility, security, and performance tested as part of delivery
- Shared responsibility for quality - devs, testers, and designers collaborating
- Testing seen as a learning and risk-reduction activity
- Anti-patterns:
- Testing left to the end of the project or sprint
- Manual test cases with no automation or reuse
- QA isolated from design, development, and delivery
- Tests that ask “does it work” while ignoring “does it make sense”
- Relying on users or production feedback as the main safety net
When we do it:
- From day one - starting with testable designs and stories
- During development using TDD, unit tests, and integration checks
- At the point of merge or deployment through CI/CD
- Post-release using synthetic monitoring or user analytics
- Any time we need confidence that what we’re building is fit for purpose
Tools & techniques:
- Unit, integration, and end-to-end (E2E) automated testing
- Test-driven development (TDD) or behaviour-driven development (BDD) (see the sketch after this list)
- Manual exploratory testing and usability testing
- Accessibility audits (e.g., axe, WAVE)
- Security testing (e.g., dependency scans, penetration tests)
- Performance and load testing tools (e.g., JMeter, k6)
- CI/CD pipelines with automated test stages
- Feature flags for safe testing in production
- Session-based test management
- Testing personas and real-device testing (e.g., BrowserStack, mobile labs)
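To show the test-first idea from the list above, here is a minimal sketch of a TDD-style unit test. The eligibility rule and its threshold are hypothetical; in TDD the tests are written first and fail until the simplest passing implementation exists.

```python
# TDD sketch: the tests below are written first and fail until the
# function exists. The eligibility rule is a hypothetical example.
import unittest

def is_eligible(age: int) -> bool:
    """The simplest implementation that makes the tests pass."""
    return age >= 18

class EligibilityTest(unittest.TestCase):
    def test_adult_is_eligible(self):
        self.assertTrue(is_eligible(18))

    def test_minor_is_not_eligible(self):
        self.assertFalse(is_eligible(17))

if __name__ == "__main__":
    unittest.main()
```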
Practice in action:
“Rather than waiting for a QA cycle, the team wrote test cases alongside their stories. Developers created unit and integration tests in parallel with code. Testers joined refinement sessions and used exploratory testing mid-sprint. Bugs were caught early, risks were surfaced fast, and the team released with confidence daily.”
Deploying
Moving working software into the hands of users - safely, confidently, and often. The reality is, many organisations find themselves stuck between rigid, legacy “change control” systems and the need for modern, continuous, safe delivery. The key is to honour the intent of these controls (for safety, transparency, accountability, etc.) while reframing how we achieve them - through smaller, safer, automated, observable releases, rather than stage gates and big-bang deployments.
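One way to reframe a manual change gate as an automated, observable control is a canary health check like the sketch below. The thresholds and metric values are illustrative assumptions; a real pipeline would pull them from monitoring tooling rather than hard-coding them.

```python
# Canary gate sketch: promote a release only while observed metrics
# stay inside agreed thresholds. All values below are illustrative.
MAX_ERROR_RATE = 0.01        # at most 1% of canary requests may fail
MAX_P95_LATENCY_MS = 400.0   # agreed latency budget

def canary_is_healthy(error_rate: float, p95_latency_ms: float) -> bool:
    """The automated stand-in for a manual 'approve to proceed' step."""
    return error_rate <= MAX_ERROR_RATE and p95_latency_ms <= MAX_P95_LATENCY_MS

# Observed canary metrics (would come from monitoring in practice).
if canary_is_healthy(error_rate=0.004, p95_latency_ms=310.0):
    print("Promote canary to 100% of traffic")
else:
    print("Roll back and investigate")
```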
Desired outcomes vs anti-patterns
- Desired outcomes:
- Small, incremental releases that are easy to review, test, and roll back
- Automated deployment pipelines with built-in quality checks
- Environments managed consistently across dev, test, and prod
- Transparent release notes, audit trails, and changelogs
- Approvals that are proportional to risk - not size or ceremony
- Anti-patterns:
- Delayed deployments due to excessive paperwork and handoffs
- “Production readiness” treated as a static document, not a dynamic state
- Risk that grows with release size, because changes are batched into infrequent, big-bang releases
- Environments that drift from each other or require manual workarounds
- Relying on heroics or after-hours “go-lives” to push changes
When we do it:
- Continuously, using automation and triggers (e.g., CI/CD pipeline)
- After a change is tested and approved via pull request or automated checks
- In response to feedback, user need, or hotfixes
- As a learning tool - observing real usage and outcomes
- Even during discovery (via feature flags, internal betas, etc.)
Tools & techniques:
- CI/CD platforms (e.g., Azure DevOps, GitHub Actions, GitLab, Jenkins)
- Beta programs or