
Software Development Life Cycle (SDLC)

A high-level outline of the cycle is as follows:

Product management phase (ongoing process)

  • Define Requirements - Define the requirements for the iteration based on the product backlog, sprint backlog, and customer and stakeholder feedback.
  • Design and Strategy - Design UI/UX based on defined requirements and form an initial technical strategy/plan for the project. This step also includes writing user stories for the features.
  • Security and Privacy - Highlight and emphasize the parts where security is critical, to prevent security breaches, privacy leaks, or data being shared outside of the controlled environment.

Software Development Life Cycle (recurring loop)

  • Milestone plan - Pick the top-priority features that are part of this phase of the project and plan them.
  • Development - Develop the software locally based on the defined requirements and test it in development environments.
  • QA (Quality Assurance) testing - Continuous internal and external testing based on the user stories and design mockups.
  • Release to staging - Release stable features to the staging server.
  • Performance or Security testing - For large releases we perform performance and security testing, usually in the staging environment (see the load-test sketch after this list).
  • UAT (User Acceptance Testing) and Validation Testing - UAT often involves testing by people from the intended audience, and recording and correcting any defects that are discovered. Validation testing can be done by project stakeholders who are familiar with the project scope, to ensure the product meets the requirements.
  • Release - Integrate and deliver the working iteration into production
  • Data analysis - Review the available data points from production environments to analyze the effect of the release (see more in post release section below).
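
As a rough illustration of the performance-testing step above, the sketch below fires concurrent requests at a staging endpoint and reports latency statistics. It uses only the Python standard library; the URL and the request counts are placeholders, and real load testing would typically use a dedicated tool.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical staging endpoint; substitute the real URL under test.
    STAGING_URL = "https://staging.example.com/health"

    def timed_request(_):
        """Issue one request and return its latency in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(STAGING_URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def run_load_test(concurrency=20, total_requests=200):
        """Fire total_requests requests with `concurrency` workers, report latency."""
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            latencies = list(pool.map(timed_request, range(total_requests)))
        print(f"requests:     {len(latencies)}")
        print(f"mean latency: {statistics.mean(latencies):.3f}s")
        # quantiles(n=20) yields 19 cut points; the last one is the 95th percentile.
        print(f"p95 latency:  {statistics.quantiles(latencies, n=20)[-1]:.3f}s")

    if __name__ == "__main__":
        run_load_test()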

Throughout the cycle the following is continuously performed:

  • Unit testing - Test individual units to ensure they produce the expected output for a given input (see the sketch after this list).
    We are currently using the following:
  • Code coverage - To outline code not covered by the test suite.
    We are currently using the following:
  • CI (Continuous Integration) - Automatically build and test the combined components on every change to verify they produce the expected output together.
    We are currently using the following:
  • CD (Continuous Delivery) - Automated release process.
    We are currently using the following:
  • Performance - We test the behavior of the system under heavy load to identify bottlenecks and validate stability.
    We are currently using the following:
  • Security - We continuously monitor the code base for outdated dependencies, and run frequent checks to ensure compliance with best practices.
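
To make the unit-testing item concrete, here is a minimal pytest-style sketch; apply_discount is a hypothetical function invented for the example, not part of any project code.

    # test_pricing.py - a minimal pytest-style sketch.
    import pytest

    def apply_discount(price, percent):
        """Return price reduced by percent, rejecting out-of-range discounts."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_basic():
        # Expected output for a given input.
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_zero_percent():
        assert apply_discount(19.99, 0) == 19.99

    def test_apply_discount_rejects_invalid_percent():
        with pytest.raises(ValueError):
            apply_discount(10.0, 150)

Running "coverage run -m pytest" followed by "coverage report" is one common way to produce the coverage outline mentioned above; the team's actual tooling may differ.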

Shipping

  • Tag and Changelog - Outline the changes for the release and tag the code to provide a link to a specific version (see the sketch after this list).
  • Documentation & updates - Anything relevant to the released version, such as (non-exhaustive):
    • Architecture changes
    • New requirements for deployment
    • Sequence diagrams
    • API documentation
  • Internal & external sign off - Ensure internal and external sign-off for the release.
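
One possible shape for the tag-and-changelog step, sketched in Python around plain git commands; the tag names are placeholders and the exact workflow differs per project.

    import subprocess

    # Hypothetical tag names; substitute the previous and new release tags.
    PREVIOUS_TAG = "v1.1.0"
    NEW_TAG = "v1.2.0"

    def draft_changelog(previous_tag, new_tag):
        """Collect one-line commit subjects since the previous release tag."""
        log = subprocess.run(
            ["git", "log", "--oneline", f"{previous_tag}..HEAD"],
            capture_output=True, text=True, check=True,
        ).stdout
        entries = [f"- {line}" for line in log.splitlines()]
        return f"## {new_tag}\n" + "\n".join(entries)

    def tag_release(new_tag, changelog):
        """Create an annotated tag so the changelog is linked to an exact version."""
        subprocess.run(["git", "tag", "-a", new_tag, "-m", changelog], check=True)

    if __name__ == "__main__":
        changelog = draft_changelog(PREVIOUS_TAG, NEW_TAG)
        print(changelog)
        tag_release(NEW_TAG, changelog)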

On top of this, the following take place before major production releases:

  • Static code security scans - Check everything “under the hood” without actually executing the code, to detect basic flaws and verify code sanity.
  • Vulnerability scans - Scan the apps for security vulnerabilities such as cross-site scripting, SQL injection, and insecure server configuration (see the sketch after this list).
  • Regression testing - Test the new version to ensure that any change or addition hasn’t broken any existing functionality.
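
For illustration, the sketch below shows the class of flaw a vulnerability scan is meant to catch: SQL built by string interpolation versus a parameterized query. It uses Python's built-in sqlite3 module and invented data.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # a classic injection payload

    # UNSAFE: building SQL by string interpolation. The payload widens the
    # WHERE clause, exactly the flaw a vulnerability scan should flag.
    unsafe = conn.execute(
        f"SELECT role FROM users WHERE name = '{user_input}'"
    ).fetchall()
    print("unsafe query matched:", unsafe)  # rows returned despite bogus name

    # SAFE: a parameterized query treats the payload as a literal value.
    safe = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print("safe query matched:", safe)  # no rows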

For major releases, and to ensure impartiality of the results, we leverage the client’s internal IT team or an external vendor (e.g. Cigital).

Note:

  • In practice there are multiple smaller loops and feedback channels in each step to allow reacting faster to new circumstances and requirements.

Post release

Post release, we collect and analyze data from various sources and either work the findings into the requirements of the next iteration or react to them in real time.

  • Customer and stakeholder feedback - Collect feedback from users and product stakeholders.
  • Analytics - We collect and report on user behavior (e.g. Google Analytics, Google Data Studio, Tableau, etc.)
  • User errors - Automated reports and monitoring of user errors and crashes (e.g. Fabric.io, Sentry; see the sketch after this list)
  • System metrics - Processing system level time-series data and infrastructure monitoring (e.g. TICK stack)
  • Log pipeline - Log collection and analysis (e.g. ELK stack)
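
As a minimal example of wiring up the automated error reports mentioned above, here is a sketch using the Sentry Python SDK (one of the tools listed); the DSN shown is a placeholder for the project's real DSN.

    import sentry_sdk

    # Placeholder DSN; use the project's real DSN from the Sentry dashboard.
    sentry_sdk.init(
        dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
        environment="production",
        traces_sample_rate=0.1,  # sample a fraction of transactions
    )

    def risky_operation():
        raise RuntimeError("example failure")

    try:
        risky_operation()
    except Exception as exc:
        # Unhandled exceptions are reported automatically; handled ones
        # can be forwarded explicitly.
        sentry_sdk.capture_exception(exc)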

Additionally, for mobile apps:

  • User reviews - Review ratings and comments on the major stores (e.g. AppFollow for Google Play and the App Store)